Temperature rise as K?

Status
Not open for further replies.

Jakelian (Industrial)
May 24, 2009
Dear users,

In the heating sections of several European (EN) standards, there are tables giving 'temperature rise as K'. For example: "During normal operation, the temperature rise of the appliance's walls shall not exceed 140 K". What does a 'temperature rise of 140 K' mean?

Thanks!
 

Most IEC standards use a maximum ambient temperature of 40C as the basis for calculating the permissible temperature rises, thus (for example) a Class B insulation system permits a 90K rise on a 40C ambient giving a maximum service temperature of 130C.
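The arithmetic in the post above can be sketched in a few lines. This is a minimal illustration using the post's own figures (40 C ambient, 90 K Class B rise), not a quotation from any standard; check the applicable IEC document before relying on specific class values.

```python
# Minimal sketch of the ambient-plus-rise arithmetic described above.
# Because the kelvin and the Celsius degree are the same size of interval,
# a rise expressed in K adds directly to a temperature expressed in °C.

def max_service_temp_c(ambient_c: float, rise_k: float) -> float:
    """Maximum service temperature in °C from ambient (°C) plus rise (K)."""
    return ambient_c + rise_k

# The post's Class B example: 90 K rise on a 40 °C ambient.
print(max_service_temp_c(40.0, 90.0))  # 130.0
```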


----------------------------------

If we learn from our mistakes I'm getting a great education!
 
I do not see any point in using K, however correct it may be on a technical basis; it is not relevant to day-to-day life.

If the ambient temperature is indicated in C (or F), there is no sense in indicating the rise in K. There was no confusion over using C or F for decades, so why mess with it?

Keep the K for scientific journals and conversion tables. Normal applications do not need it.

Perhaps there would be no issues if we say the floor-to-floor height is 15 feet and the building should be 50 meters tall, and the tile sizes 12" x 30 cm....





Rafiq Bulsara
 
IEEE Std 1-2000 refers to temperature rises in degrees C (not degrees K).

That is just a datapoint to counter the IEC reference. As I said above, it seems to me somewhat pointless to argue about which unit someone happens to think is better… with a very small amount of thinking we should be able to cope with whatever units the temperature rise is specified in.


=====================================
Eng-tips forums: The best place on the web for engineering discussions.
 
ePete,

I would give that reference much greater credibility if it was published by a nation which doesn't stubbornly cling to a modified Imperial measurement system... [smile]


----------------------------------

If we learn from our mistakes I'm getting a great education!
 
LOL. I can’t argue that point. I will forgo the usual furlongs per fortnight joke.

=====================================
Eng-tips forums: The best place on the web for engineering discussions.
 
There is a reason most engineers are happily invited to meetings in job trailers but are carefully avoided in upper-management meetings with CEOs, VPs, or boards, or even at important marketing presentations!!



Rafiq Bulsara
 
Dear bashar2008, burnt2x, and GTstartup

Thanks a lot for all the valuable effort and data.

Best regards...
 
Skogsgurra:

I am pretty sure that 0 K is absolute zero. That is how we define what absolute zero is.
 
Ya, I always thought 0 K was defined as absolute zero. This also corresponds to −273.15 °C or −459.67 °F. If that's not true then there are a lot of people doing it wrong.

I like ElectricPete's answer. Since we can't all agree on common units then we need to be able to deal with what is given.
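The conversions quoted above can be checked with plain arithmetic. A minimal sketch using only the standard offset and scale factors (nothing assumed beyond the figures in the post):

```python
# Check of the figures quoted above: 0 K corresponds to -273.15 °C,
# which in turn corresponds to -459.67 °F.

def kelvin_to_celsius(t_k: float) -> float:
    """Absolute temperature in K to °C (offset by 273.15)."""
    return t_k - 273.15

def celsius_to_fahrenheit(t_c: float) -> float:
    """Temperature in °C to °F (scale by 9/5, offset by 32)."""
    return t_c * 9.0 / 5.0 + 32.0

print(kelvin_to_celsius(0.0))                                 # -273.15
print(round(celsius_to_fahrenheit(kelvin_to_celsius(0.0)), 2))  # -459.67
```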
 
No, marks1080. You put the cart before the horse. The strict definition of the kelvin is: "the fraction 1/273.16 of the thermodynamic temperature of the triple point of water." The definition was adopted by the CGPM in 1967.

The definition says nothing about a reference temperature. It is exactly the same as the historical definition of the metre, which was defined as a fraction of the earth's quadrant. But that does not mean that it necessarily has to start at the North Pole (or the Equator).

Re: "If that's not true then there are a lot of people doing it wrong" - Yes. But most people think it is right. Remember the saying "Ten thousand flies can't be wrong - eat sh-t". Same thing here.



Gunnar Englund
--------------------------------------
100 % recycled posting: Electrons, ideas, finger-tips have been used over and over again...
 
skogs,

Why would they pick a rather inconvenient number like 273.16 if it was for some purpose other than to place the zero point of the Kelvin scale at absolute zero?


----------------------------------

If we learn from our mistakes I'm getting a great education!
 
You still do not get it!

It is a unit. Not part of a scale.

I do not say that it isn't based on absolute zero - it just doesn't start there. It starts nowhere. That is why it has been chosen for temperature deltas. C and F are for temperatures as such, but not for differences. It is about scientific principles.

Why is this so difficult to understand?

Gunnar Englund
--------------------------------------
100 % recycled posting: Electrons, ideas, finger-tips have been used over and over again...
 
They picked 1/273.16 so that the delta T associated with 1K would be exactly the same as the delta T associated with 1C.
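The point above, that an interval of 1 K is identical to an interval of 1 C, can be shown with a short sketch. For contrast it also shows the Fahrenheit interval, which differs by a factor of 9/5; this contrast is an illustration, not part of the original post.

```python
# A temperature *difference* has the same numeric value in K and in °C,
# because the interval sizes are identical. The same difference in °F
# is larger by a factor of 9/5 (the +32 offset cancels in a difference).

def delta_c_to_k(dt_c: float) -> float:
    """A difference in °C expressed in K: numerically unchanged."""
    return dt_c

def delta_c_to_f(dt_c: float) -> float:
    """A difference in °C expressed in °F: scaled by 9/5, no offset."""
    return dt_c * 9.0 / 5.0

print(delta_c_to_k(140.0))  # 140.0
print(delta_c_to_f(140.0))  # 252.0
```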
 
Skogsgurra,
I do believe you are mistaken. In my thermodynamics and aerospace classes way back when, it was defined as 0 K = absolute zero. See:


I don't think that has changed since the mid-seventies.

EEJaime
 
skogsgurra said:
You still do not get it!

It is a unit. Not part of a scale.

I think we understand perfectly well the difference between a unit and a scale. Kelvin can be used in both ways as you say: as a unit/increment (the first sentence below) and as a temperature scale (the second sentence below):
Wiki said:
The kelvin (symbol: K) is a unit increment of temperature and is one of the seven SI base units. The Kelvin scale is a thermodynamic (absolute) temperature scale where absolute zero, the theoretical absence of all thermal energy, is zero (0 K).

But it should be noted that Celsius can be used in the same way, as either a scale (first sentence below) or an increment (second sentence below):
Wiki said:
Celsius (also known as centigrade) is a temperature scale that is named after the Swedish astronomer Anders Celsius (1701–1744), who developed a similar temperature scale two years before his death. The degree Celsius (°C) can refer to a specific temperature on the Celsius scale as well as serve as a unit increment to indicate a temperature interval (a difference between two temperatures or an uncertainty).
If Wikipedia is not a solid reference, please consider this reference: “Applied Dimensional Analysis and Modeling” by Thomas Szirtes and P. Rózsa
It is a book that delves into the restructuring of problems into dimensionless variables, guessing the solution to an analytic problem by unit analysis, etc. Chapter 3 is a detailed discussion of unit systems in use. The authors presumably know just a little bit about units. (*)

On Page 287 they analyze a thermal growth problem using a difference in temperatures. And what unit is used for the delta-T? Degrees C. See for yourself.

(* they are Canadian – does that make a difference?)


=====================================
Eng-tips forums: The best place on the web for engineering discussions.
 
The OP asked what a temperature difference expressed in K means. He got a valid answer and then protested that 140 K = −133 °C.

That is where the confusion started. It is important to understand the difference between a measurement with a reference and a delta measurement. The former has a zero point, be it the center of Paris or the triple point of water. The latter does not have a zero point. It just says how long something is or how long something takes - or how big a temperature rise is.

That is all there is to it. I do not deny at all that K is sometimes used to express temperature. But then, it should be used with words saying what reference is used. Common usage is absolute zero. But absolute zero is NOT defined as 0 K. The definition is that all molecular movement has stopped. Thermal energy is zero.
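The distinction drawn above, a bare difference versus a reading against a reference point, is exactly where the OP's confusion arose. A minimal sketch of the two interpretations of "140 K" (the example ambient of 25 °C is an illustrative assumption, not from the thread):

```python
# Two readings of "140 K": as a *rise* (a bare difference, no zero point)
# and as an *absolute* temperature (referenced to absolute zero).
# Mixing them up turns a plausible wall temperature into -133 °C.

ABS_ZERO_C = -273.15  # 0 K expressed on the Celsius scale

def rise_to_final_c(ambient_c: float, rise_k: float) -> float:
    """A rise in K added to an ambient in °C (interval sizes match)."""
    return ambient_c + rise_k

def absolute_k_to_c(t_k: float) -> float:
    """An absolute reading in K converted to °C (needs the zero point)."""
    return t_k + ABS_ZERO_C

print(rise_to_final_c(25.0, 140.0))          # 165.0  (the intended meaning)
print(round(absolute_k_to_c(140.0), 2))      # -133.15 (the mistaken reading)
```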

Gunnar Englund
--------------------------------------
100 % recycled posting: Electrons, ideas, finger-tips have been used over and over again...
 
I agree that on a "scale" the increment K = C, and hence there is no benefit or additional technical accuracy in expressing the ambient in C and the rise in K.

Unless it changed in last 30 years, we were taught the absolute zero happens at 0 K.

Oh, and that brilliant definition of the meter being some fraction of the earth's quadrant is another example of going overboard with "technical" definitions. If for some reason the earth contracts or expands, which it certainly can, all the distance measurements in meters will be wrong!! Don't they have a master meter stick in Paris? Or is that the mass of a kg?

Rafiq Bulsara
 
Well, if "absolute zero is NOT defined as 0 K" then 0 K is defined to be absolute zero.

I think everyone understood the point about delta temperatures being differences in temperature before anyone bothered to point that obvious fact out.

I am not getting your point about zero K not being absolute zero... I will have to go back to my graduate studies prof and correct him on this.

Keith Cress
kcress -
 
I think it is a matter of cause and effect. Absolute zero is defined as that point at which there is no thermal motion. It doesn't matter what temperature scale you might use; absolute zero doesn't care. The Kelvin scale happened to be set up such that the increment from one degree to the next was the same as the Celsius (probably centigrade scale at the time) degree increment and that the zero point would be at absolute zero.

Absolute zero defines 0K, 0K does not define absolute zero.

On the other hand, this is all way too much ado about nothing. Anybody who can't recognize that a 140 K rise is exactly the same thing as a 140 C rise needs to go back and brush up on basic units, and anybody who would write a standard that has the ambient in C and the rise in K needs to get their head back into somewhere that the sun might shine.
 