I have noticed that on infrared film, people's eyes reflect infrared light much like the eyes of nocturnal animals (and many other animals, for that matter) reflect visible light. I know that this reflection maximizes night vision by bouncing more light back onto the retina.
My question is this: If a human had the nerves in our eyes to process infrared light, would someone with these 'special' eyes be able to see better than someone with 'regular' eyes? What would they see? What about in low-light conditions?
… Infra-red illumination of the eye produces the 'bright pupil' effect
No — the retina has no infrared receptors, or too few to matter.
Practical example: most remote controls use IR LEDs (infrared light-emitting diodes) to send commands to a receiver device. So take a remote control, keep pressing a button, and look at its front. Do you see any flashing light? Now take a camera (phone camera, photo camera, or camcorder) and look at the remote through it while pressing a button. Now you see it producing white light. The camera sensor detects IR. That is how night vision in cameras works.
What you read in that article is about how light is reflected by the eye (the pupil). It has nothing to do with the retina, which is on the 'back wall' of the eyeball.
Infrared light is emitted by warm things, like our heads. We'd be detecting it ambiently all the time. That's why animals that can detect infrared are not warm-blooded.
Benefits of infrared sauna and cancer patients
Dr. Hernandez graduated from medical school at Universidad Autonoma Metropolitana, Mexico City in 1982 and has more than 20 years' experience as a treating physician, consultant, and admissions manager at integrative clinics.
Sauna is not a recent invention of mankind. In fact, the cradle of sauna is far from the luxurious spas from today. Ancient civilizations from all parts of the world once made use of rudimentary facilities, with the aim of raising body temperature (hyperthermia) to promote profuse sweating, aware that this powerful mechanism could activate healing processes in the body.
The healing effects of sauna are mainly derived from the activation of the parasympathetic system.
Science has further proven several benefits of regular sauna use, in particular the infrared sauna, which uses infrared radiation (light) to raise body temperature and has more therapeutic effects than steam.
An integrative approach to treating cancer will never be complete without a detoxification program. Among the different ways to eliminate waste from our body, sweat plays a key role. In fact, it has been said that the skin is the main detoxification organ in our body. Heavy metals, phenols, phthalates, medications taken chronically, and many other cancer-causing chemicals are expelled through our pores when we sweat.
Speeds recovery processes
The use of sauna increases IGF-1, a hormone vital for growth and essential in recovery processes. One study found that IGF-1 increased by 142% during the use of an infrared sauna.
Another study detected a 5-fold increase in Human Growth Hormone (HGH) levels with just two 15-minute infrared sauna sessions per week.
Improves blood flow
Cancer cells grow and reproduce better in low-oxygen environments. Sauna therapy, however, allows for greater absorption of nutrients and oxygen in the areas and organs of the body affected by cancer, thereby making the cancer cells more vulnerable to treatment.
Improves mood and decreases stress levels
It is well known that a bad attitude and negative feelings adversely affect your immune system and promote the development of the disease. Cancer is very much linked to stress and depression. Therefore, the proper treatment of these conditions is essential in the context of an integrative cancer treatment program.
The use of sauna increases beta-endorphin levels, which produces a certain feeling of euphoria or happiness. Whole-body heat therapy has accordingly been shown to improve the symptoms of depression in patients with cancer.
World-first 3D bionic eye could enable superhuman sight, night vision
The human eye is an incredibly complex piece of equipment, so it’s no wonder that we’ve had a hard time reverse engineering it. Now, researchers have unveiled the world’s first 3D artificial eye, which can not only outperform other devices but has the potential to see better than the real thing.
Bionic eyes are emerging as a way to restore vision to people who have lost their sight, and possibly even those who never had it to begin with. Currently the most advanced versions are those from companies like Bionic Vision Australia and Second Sight, which have both already been implanted into patients.
Both of these devices take the same basic form, starting with a pair of glasses with a camera in the center. The data from that is processed by a small unit worn outside the body, then sent to an implant on the user’s retina. From there, the signals are transmitted to the visual centers of the brain.
And they work. Users have reported being able to see flashes of light again, for the first time in years. Unfortunately, this vision isn’t clear enough for them to rely on to navigate the world, and other studies have shown that these kinds of bionic eyes might produce streaky images and are too slow to capture fast movements.
But this new device could herald a huge improvement. A team led by scientists at the Hong Kong University of Science and Technology (HKUST) has developed what they call the Electrochemical Eye (EC-Eye).
A cross section of the makeup of the Electrochemical Eye (EC-Eye)
Rather than using a two-dimensional image sensor like a camera, the EC-Eye is modeled after a real retina with a concave curve. This surface is studded with an array of tiny light sensors designed to mimic the photoreceptors on a human retina. These sensors are then attached to a bundle of wires made of liquid metal, which act as the optic nerve.
The team tested the EC-Eye and showed that it can already capture images relatively clearly. It was set up in front of a computer screen displaying large individual letters, and it captured them clearly enough for them to be read.
Although it’s a huge improvement over existing bionic eye designs, the EC-Eye’s vision still falls far short of a natural human eye. But, the team says, this might not be the case forever. The technology has the potential to outshine the real thing, by using a denser array of sensors and attaching each sensor to an individual nanowire. The team even says that using other materials in different parts of the EC-Eye could bestow users with higher sensitivity to infrared – essentially, night vision.
Of course, there’s still plenty of work to do in future, but the EC-Eye looks promising.
The research was published in the journal Nature.
Theoretically, two layers are better than one for solar-cell efficiency
Schematic of a double thin-film layered solar cell. Sunlight enters at the top and reaches the CIGS and CZTSSe layers, which absorb the light and create positive and negative charge carriers that travel to the top and bottom contact layers, producing electricity. Credit: Akhlesh Lakhtakia, Penn State
Solar cells have come a long way, but inexpensive, thin film solar cells are still far behind more expensive, crystalline solar cells in efficiency. Now, a team of researchers suggests that using two thin films of different materials may be the way to go to create affordable, thin film cells with about 34% efficiency.
"Ten years ago I knew very little about solar cells, but it became clear to me they were very important," said Akhlesh Lakhtakia, Evan Pugh University Professor and Charles Godfrey Binder Professor of Engineering Science and Mechanics, Penn State.
Investigating the field, he found that researchers approached solar cells from two sides: the optical side, looking at how the sun's light is collected, and the electrical side, looking at how the collected sunlight is converted into electricity. Optical researchers strive to optimize light capture, while electrical researchers strive to optimize conversion to electricity, each side simplifying the other's domain.
"I decided to create a model in which both electrical and optical aspects will be treated equally," said Lakhtakia. "We needed to increase actual efficiency, because if the efficiency of a cell is less than 30% it isn't going to make a difference." The researchers report their results in a recent issue of Applied Physics Letters.
Lakhtakia is a theoretician. He does not make thin films in a laboratory, but creates mathematical models to test the possibilities of configurations and materials so that others can test the results. The problem, he said, was that the mathematical structure of optimizing the optical and the electrical are very different.
Solar cells appear to be simple devices, he explained. A clear top layer allows sunlight to fall on an energy conversion layer. The material chosen to convert the energy, absorbs the light and produces streams of negatively charged electrons and positively charged holes moving in opposite directions. The differently charged particles get transferred to a top contact layer and a bottom contact layer that channel the electricity out of the cell for use. The amount of energy a cell can produce depends on the amount of sunlight collected and the ability of the conversion layer. Different materials react to and convert different wavelengths of light.
"I realized that to increase efficiency we had to absorb more light," said Lakhtakia. "To do that we had to make the absorbent layer nonhomogeneous in a special way."
That special way was to use two different absorbent materials in two different thin films. The researchers chose commercially available CIGS—copper indium gallium diselenide—and CZTSSe—copper zinc tin sulfur selenide— for the layers. By itself, CIGS's efficiency is about 20% and CZTSSe's is about 11%.
These two materials work in a solar cell because the structure of both materials is the same. They have roughly the same lattice structure, so they can be grown one on top of the other, and they absorb different frequencies of the spectrum so they should increase efficiency, according to Lakhtakia.
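A rough way to see why a second, longer-wavelength absorber helps is to count the extra solar photons it makes usable. The sketch below is a toy calculation, not the paper's model: it approximates the sun as a 5778 K blackbody, and the bandgap edges used here (~1130 nm for CIGS, ~1240 nm for CZTSSe) are assumed round literature values.

```python
import math

# Toy model (not the paper's): approximate the sun as a 5778 K blackbody
# and count photons in each absorber's wavelength band. Bandgap edges of
# ~1130 nm (CIGS, ~1.1 eV) and ~1240 nm (CZTSSe, ~1.0 eV) are assumed
# round values for illustration.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K
T_SUN = 5778.0  # effective solar surface temperature, K

def photon_flux(lo_nm, hi_nm, steps=2000):
    """Relative blackbody photon flux between two wavelengths (arbitrary units)."""
    total = 0.0
    dl_nm = (hi_nm - lo_nm) / steps
    for i in range(steps):
        lam = (lo_nm + (i + 0.5) * dl_nm) * 1e-9  # midpoint wavelength, m
        spectral = (2.0 * C / lam**4) / (math.exp(H * C / (lam * KB * T_SUN)) - 1.0)
        total += spectral * dl_nm
    return total

top = photon_flux(300.0, 1130.0)     # photons the top (CIGS) layer can absorb
extra = photon_flux(1130.0, 1240.0)  # extra band the bottom (CZTSSe) layer adds
print(f"bottom layer adds ~{100.0 * extra / (top + extra):.0f}% more usable photons")
```

The toy model ignores everything about carrier extraction, but it shows the basic point: the bottom layer harvests a band of the spectrum the top layer passes straight through.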
"It was amazing," said Lakhtakia. "Together they produced a solar cell with 34% efficiency. This creates a new solar cell architecture—layer upon layer. Others who can actually make solar cells can find other formulations of layers and perhaps do better."
According to the researchers, the next step is to fabricate these cells experimentally and explore which configurations give the final, best answers.
The simple answer is that they are using near IR. LED manufacturers have a good handle on how to make them so they are affordable.
Their center frequencies may be invisible to the M-1 eyeball (i.e. the human eye), but unless a filter is placed in front of the LEDs (which reduces their illumination), some of the output will be visible.
The effect is minor. Basically, to see it you must look directly at the emitter. You're not going to see it in reflections or scene illumination.
Far-IR is completely invisible. But a whole lot more expensive because the manufacturing process is different.
Near-IR emitters are mass-produced. Far-IR not so much.
IR lasers are another story. They emit at a single frequency, so there is no Gaussian curve describing their output in the frequency domain. They are so invisible that they can be dangerous. Working around lab CO₂ lasers, for instance, requires removing all jewelry and controlling the beam. They will not trigger a blink response, so you can sustain a lot of damage in a short time and not know it right away.
The transition from visible wavelengths to invisible is not infinitely abrupt. Your eye's sensitivity falls off in the IR range. But in the near IR, it may not be zero sensitivity.
And the emission spectrum of LED's is not infinitely narrow. So not all of the photons coming off of an LED have the exact same wavelength.
The net effect of these two things is that when near infrared LED's are driven very hard, some photons will come off of them that are visible. To the camera, those LED's are like a super bright spotlight. But to your eye, they are just glowing modestly.
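The two effects above can be put into rough numbers. As an illustrative sketch (the 20 nm spectral width and the ~780 nm visibility cutoff are both assumed round figures, not measured values), model an 850 nm LED's emission as a Gaussian and ask what fraction of its photons fall in the visible range:

```python
import math

# Toy model: treat an 850 nm LED's emission spectrum as a Gaussian with
# a ~20 nm standard deviation (an assumed round number) and estimate the
# fraction of emitted photons below an assumed ~780 nm visibility cutoff.
def visible_fraction(center_nm=850.0, sigma_nm=20.0, cutoff_nm=780.0):
    # Fraction of a Gaussian below the cutoff = standard normal CDF at z.
    z = (cutoff_nm - center_nm) / sigma_nm
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(f"{visible_fraction():.5f}")  # on the order of 0.0002, i.e. ~0.02%
```

Only a tiny visible tail leaks out, which is why a hard-driven IR LED blazes on camera but only glows faintly to the eye.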
I have also seen cameras where the LED's were not visible at all. So there is some variation there.
I have never noticed that a remote control or any other IR LED emits any red light. It might glow very faintly, because a tiny fraction of the light is emitted at shorter, visible wavelengths.
Maybe you are a bit special and can see light deeper into the IR range; that would be interesting.
On the other hand, you ask
why do most tv remote controls and security cameras appear to have a visible red colored LED lit when the infrared light is being emitted?
Why are there two LEDs, one red and one IR?
That's just a feedback that the device is working. Remote controls have a visible LED on the top (mine is blue, by the way) and an IR LED pointing forward.
Security cameras use a visible LED to signal to the people in front of them that they are on or recording; there are even fake cameras with no more electronics than just this LED and its blinking circuit.
And in advertisements like your picture, the IR LEDs are often "photoshopped" red.
In reality, camera sensors can see IR light, but it appears bluish white. That is why, for example, cigarettes sometimes glow blue instead of red in photos. Today there is usually a filter in front of the camera sensor that prevents this. It typically does not block the IR of an LED that is very near the visible spectrum, but some filters do.
Is that visible red light present as a convenience (introduced by grace of the component designer?) or as a by-product of emitting actual infrared light?
It isn't red light at all. It's infrared light which is perceived as red.
The human eye has three types of cones (color-sensing cells): S-cones, M-cones, and L-cones. They're roughly equivalent to blue, green, and red color sensors, and each type has its own approximate response curve.
L-cones are mostly sensitive to red light in the 560-580 nm range, but will respond weakly up to 1000 nm, which is well into the infrared range. If an infrared emitter is bright enough -- which the LEDs on an infrared camera certainly are! -- it will activate L-cones, making it appear red.
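To make that "weak response up to 1000 nm" concrete, here is a deliberately crude sketch. The exponential falloff and its 29 nm decay constant are invented round numbers chosen only so the response lands near 1e-5 of peak at 900 nm; real cone sensitivity curves are not this simple.

```python
import math

# Illustrative only: model L-cone sensitivity as flat up to its ~565 nm
# peak and exponentially decaying beyond it. The 29 nm decay constant is
# a made-up value chosen so the response near 900 nm is ~1e-5 of peak.
def relative_l_cone_response(wavelength_nm, peak_nm=565.0, decay_nm=29.0):
    if wavelength_nm <= peak_nm:
        return 1.0
    return math.exp(-(wavelength_nm - peak_nm) / decay_nm)

print(f"{relative_l_cone_response(900.0):.1e}")
```

An IR emitter only has to be bright enough to overcome a factor like that for the L-cones to fire and the source to look dimly red.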
Cheap cameras from China or a big-box store will usually use 840-850 nm LEDs driven very hard to produce the illumination (a mostly invisible spotlight) for their night vision.
The LED's light output spans roughly ±20 nm around the listed (center) wavelength.
Especially in the dark, most human eyes (depending on genes) have at least a weak response out to something like 900 nm. Double-blind tests by professionals (the test methodology, not the vision, lol) have shown that some people can reliably detect light a little past 1000 nm. This doesn't mean it lights the room up; it means that when someone in another room switched the IR light on in the test room, the subject perceived enough of a change in their vision to answer "was it on?" correctly more than 50% of the time.
Your eye's brightness signal to the brain trails off like a bell curve at high and low wavelengths, and no two people have exactly the same vision (as some of the spectral charts posted would suggest).
There is another thing at play as well: something like a double bounce of the photons inside the eye lets them trigger a stronger activation than would otherwise happen. I tried to Google the paper I came across last week but had no luck; maybe someone else can chime in.
Practically speaking: the higher you go in nm, the less visible the light is, especially at the point it's coming from.
If you want IR night-vision cams that don't scream "here is my camera," or cause a passer-by to notice a red orb 10 ft off the ground from a distance, look for 940 nm IR LED illuminators. In pure dark, or close to it, you may see them, but nothing like the obviousness of the 8xx nm or 7xx nm emitters.
Most cameras have less sensitivity at 9xx nm, but such systems do exist, and regular cameras without IR filters will usually see this much better than your eye. There are some YouTube videos comparing 840 nm and 9xx nm emitters with average cameras.
It's important to note that although IR light sources are perceived as only glowing faintly, a strong IR source can damage the eyes. So if you buy high-powered IR illuminators, do not put one next to your eyeball and look at it! You will fry your eyes!
I noticed one commenter mentioned the price, but it's really not that bad, and it has been following its own Moore's law, so if you looked six months ago it's worth looking again. At the other end of the spectrum, in UV land, LEDs that were a lab experiment six years ago and cost $200 until a few weeks ago just dropped to $12. LED tech is moving fast; anyone quoting a price without having looked at it that month should refrain from stating it as fact.
But What Can You Do With an IR Camera?
Maybe the question should rather be "what can't you do with an IR camera?" If you get a chance to play with one of these things, you will probably be amazed. We are so used to seeing the world primarily by reflected light. However, most objects around us also emit their own light in the infrared region. It's like being in a world where everything is a light bulb.
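The "everything is a light bulb" point follows from Wien's displacement law: a body's peak thermal-emission wavelength is inversely proportional to its temperature. A quick check (the ~305 K skin temperature is an assumed round value):

```python
# Wien's displacement law: peak thermal-emission wavelength for a body
# at temperature T. At skin temperature (~305 K, an assumed value) the
# peak sits in the long-wave IR band that thermal cameras use.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temp_k):
    return WIEN_B / temp_k * 1e6  # micrometers

print(f"{peak_wavelength_um(305.0):.1f} um")  # ~9.5 um
```

For comparison, the same formula puts the sun's ~5778 K peak near 0.5 µm, right in the visible band, which is why our eyes evolved for reflected sunlight rather than thermal glow.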
When you use an IR camera, you literally see the world in a different light. Here are some examples of things you can see.
This is an IR selfie. Two things you can notice here. First, my glasses are dark. That’s not just because they are colder than me, but also because IR light doesn’t go through glass. You are seeing a reflection of the surroundings in my glasses. Now look at my nose. Notice that it’s a little bit darker? You will find that many humans have noses that are colder than the rest of their face. This is probably because I breathe through my nose which cools it off some.
This is an image of the Sun with a couple of tree branches in the foreground. You can’t get an accurate measurement of the temperature of the Sun because this particular camera only measures up to 270 C. What if you measured the sky? You would get a temperature reading of -40 C.
This is a composite image showing both a visible light image and an IR image. Notice that the part of the sidewalk in the shade is much cooler than the part in the sunlight. If you looked closely, you would see that part of the shadow near the sunny part is also warm. This is because the shadow just moved over that part and it hasn’t cooled off yet.
Here are two power adapters for two laptop computers. One computer is asleep and the other is in use. Notice the difference? The one on the right is much warmer; even the cable is warmer. In fact, these power adapters can get quite warm with use.
Here is both an IR image and a visible-light image of the same metallic object. The object has been sitting in the same room long enough to be at room temperature (just like everything else). You can still see the metal object in the IR because its surface is reflecting IR light from other sources.
Many attic doors don’t have insulation on them. This means that the cold attic air (this is in the winter) cools off that part of the wood. Notice also the planks that go across for support and the spots where the bolts go through the wood (metal is usually a better conductor of heat than wood). If you look carefully, you can also see the studs through the normal part of the ceiling.
I love this one. It’s a ceiling fan. When you turn these things on, the electric motor gets warm. We always think of fans as making things cooler, but they don’t do that directly. Fans work by doing two things. They circulate the air and the moving air also helps with evaporation. The evaporation cools things down.
Infrared imaging better than touch at detecting defects in protective lead aprons
The fingertips are among the body's most sensitive areas and have the ability to detect very subtle changes to the surface of an object. For this reason, inspectors looking for defects in lead aprons that are used to shield patients' vital organs from radiation exposure have run their fingers over the aprons, relying on tactile inspection combined with visual inspection to find defects.
Infrared (IR) thermal imaging is a much better detective, with 50 percent of study participants picking out all holes intentionally drilled into a test apron compared with just 6 percent of participants who detected the same defects using the tactile method, according to research published online Nov. 8, 2017 in Journal of the American College of Radiology. In addition to being a more accurate way to detect subtle defects, the IR imaging technology also reduces ionizing radiation exposure for inspectors checking the protective power of lead aprons.
"When I researched how lead aprons are inspected, I learned that a combination of tactile and visual inspection is considered the gold standard. But many of the smallest holes can be missed this way," says Stanley Thomas Fricke, Nucl. Eng., Ph.D., radiation safety officer at Children's National Health System and study senior author. "Unlike the fingertips, infrared light can penetrate the lead apron's protective outer fabric and illuminate defects that are smaller than the defect size now used to reject a protective apron. This work challenges conventional wisdom and offers an inexpensive, readily available alternative."
According to the study team, a growing number of health care settings use radiation-emitting imaging, from the operating room to the dentist's office. Lead aprons and gonadal shields lower radiation doses experienced by health care staff and patients. In compliance with regulators, these protective devices are inspected regularly. A layer of lead inside keeps patients' exposure to ionizing radiation at the lowest detectable level. The aprons are covered with nylon or polyester fabric for the patients' comfort and for ease of cleaning.
"It is standard for health care institutions to use a tactile-visual approach to inspect radiation protective apparel," Fricke says. "While increasingly common, that inspection method can allow aprons with holes and tears to slip by undetected due to the large surface area that needs to be inspected, the outer fabric that encloses the protective apron and other factors."
Fricke recalled a news clip from years ago about an IR camera used to film swimmers at the pool that, like Superman's powerful vision, could see through pool-goers' clothing. The manufacturer quickly recalled the camera. But the IR technology is a perfect fit for inspectors looking for defects hidden under a lead apron's fabric cover.
To validate this inspection alternative, the team drilled a series of nine holes ranging from 2 mm to 35 mm in diameter into a "phantom" lead apron and enclosed it within fabric that typically covers the protective shielding. The research team stapled the phantom apron to a wooden frame and placed dry wall under the frame.
Two of 31 radiation workers picked out all nine holes by touch and recorded the holes and their locations on written questionnaires.
For the IR method, the team used an infrared light to illuminate the lead apron from behind and relied on an infrared imaging camera to record 10 seconds of video from which still images were exported. Ten of 20 radiation technologists, radiology nurses and medical doctors identified all nine holes using those color photographs and recorded their entries on a questionnaire. An additional 20 percent identified eight of nine intentional defects to the phantom apron.
In both the tactile and IR groups, all participants found the largest hole and correctly recorded its location.
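The headline percentages quoted earlier follow directly from the raw counts reported here (2 of 31 tactile inspectors, 10 of 20 IR inspectors finding all nine holes):

```python
# Sanity check of the detection rates quoted above: 2 of 31 tactile
# inspectors vs. 10 of 20 IR inspectors found all nine drilled holes.
tactile_rate = 2 / 31
ir_rate = 10 / 20
print(f"tactile: {tactile_rate:.0%}, IR: {ir_rate:.0%}")  # tactile: 6%, IR: 50%
```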
"Using the tactile method for inspection, most staff who work regularly with radiation-emitting devices were able to identify defects that would cause a lead apron to be rejected, which is 11 mm holes for thyroid shields and 15 mm holes for aprons," Fricke says. "However, it is standard for these well-used aprons to develop smaller holes -- which, over time, become bigger holes. Here at Children's National, we care about every photon that touches a child."
In the next phase of the research, the team will explore infrared flash photography, cooling the apron material and the impact of high-resolution cameras with greater depth of field.
How Superman's X-Ray Vision Works
So how does Superman do it? He can see through buildings and clothing (he checks out Lois Lane's underwear in Superman 1 - more on this later). Many have attempted to answer this question of the ages, yet few have explored it in as much depth as J.B. Pittenger, who published a study in the journal Perception back in the stone ages (1983) entitled "On the plausibility of superman's x-ray vision"
But first, before we get into the meat of the paper, let's see what others around the InterWebs have said about Superman's amazing seeing-through-underwear powers.
What of the other powers? Superman's X-ray vision is not truly x-ray vision. What do you think -- Superman's eyes emit x-rays, which he uses to see with? That's not how x-rays work. They require a source that aims the x-rays toward the receiving end, whether it be eyes or photographic film. No, Superman's vision involves sensing energy fields that have hitherto been unidentified by human science. These energy fields surround and pervade all forms of matter, varying by density and vibratory rate, according to the density and composition of the object. In other words, Superman is seeing the subtle energy fields involved in the inter-transformation of energy into matter. His ability to distinguish those fields depends upon the "signal-to-noise ratio" between any object he is sensing and any intervening objects. Lead, being dense, has a field so dense that less-dense fields behind it are hard to distinguish. Gold has the same effect. But since people do not commonly use gold as shielding, it has not been written about. So people think, "Lead blocks x-rays; lead blocks Superman's x-ray vision."
Ok so we need energy fields unidentified by human science. I'll go out on a limb and guess that the scientists of Superman's home planet have discovered this energy field but didn't include it in that weird crystal house/computer/whatever thing.
Answerbag.com has a number of great speculations as well:
Just like rods and cones in the human eye, Superman possibly has x-ray detecting crystals like Silicon or Cadmium-Telluride in his eye that detect x-rays passing through a special lens called Kumakhov polycapillary focusing x-ray lens implanted in his eye.
The other possibility could be that x-rays get converted to normal light by a film of x-ray fluorescent material and then it is the normal work of the rods and cones like in case of the human eye.
Superman's eyes actually PROJECT X-rays; depending on how much is absorbed or reflected back at him, he can see through solid objects.
Back in the day, Superman's "heat vision" was actually just a creative use of his X-ray vision -- he would project enough X-Rays to actually melt or destroy an object.
Of course we can't forget to see what wikipedia says about this understudied phenomenon:
The best known figures with "x-ray vision" are the fictional superhero Superman who once had a heat producing function before that power was separated as heat vision, and the protagonist of the 1963 film X (aka X: The Man with the X-Ray Eyes).
At least in the first Superman movie, Superman's X-ray vision could see through female character Lois Lane's clothing to see the color of her underwear. This implies it had nothing to do with actual X-rays, since color is a matter of spectral properties at optical frequencies.
In the movie Superman Returns, Superman uses the X-ray vision to see into the interior of Lois Lane's body in order to check for internal injuries.
Now that we have all that out of the way, let's get on to some 'real' science.
Let's start with the basic human visual system. Light propagates through the air, being partially reflected by the objects that it encounters. This light reaches our eyes and is translated into chemical responses by the rods and cones in our retinas, and then travels through various sets of neurons where it is processed in different ways, giving rise to the experience of vision. So basically we need an information source and a processor. In the case of human vision this is light and the brain. In the case of superman this becomes more complicated.
There are three basic conditions that a superman x-ray system must meet to be plausible.
The rays must be such that all objects but lead are entirely or almost entirely transparent to them. Lead is always entirely opaque to the rays.
The rays and processor must result in Superman perceiving the same colors as would an Earthling viewing the scene in ordinary sunlight.
The rays must permit Superman, but not an Earthling standing in line with the reflected rays, to see through normally opaque surfaces.
These conditions lead to two clear solutions.
The first solution:
Rays are emitted by Superman's eyes which penetrate objects and then return to his eyes.
- Real x-rays penetrate lead to some degree (perhaps Superman uses a different energy wave?)
- The 'stopping problem.' Once the rays penetrate something why do they not continue on through the next object and the next and the next. If the rays do somehow stop/are lessened after penetrating the object how do they then get back to Superman in order for him to process the signal?
- To generate color the rays emitted by Superman's eyes have to be multifrequency so that they bounce off/are absorbed by different colors in the environment.
The second solution:
Two types of rays are emitted by superman, one to make objects transparent and the other to 'see'
- There is no evidence that a ray of this type could exist.
- The 'stopping problem' is still in effect.
- The transparency ray violates the exclusivity condition. If a ray makes things transparent, then all the normal humans could see through walls as well (assuming Superman shot his rays out for them). Then again, if the rays made objects transparent only to a certain spectral band not available to human perception, let's say ultraviolet or infrared, the transparency ray would not have to violate the exclusivity condition. But then color processing gets whacked.
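The 'stopping problem' arises because real radiation is never cleanly stopped by one object: each material just attenuates the beam exponentially, per the Beer-Lambert law. A sketch of that behavior, where the attenuation coefficients are rough assumed order-of-magnitude values for ~100 keV photons, not measured data:

```python
import math

# Beer-Lambert attenuation: each slab of material attenuates a beam
# exponentially rather than stopping it outright. The coefficients
# below are rough assumed order-of-magnitude values for ~100 keV
# photons, used only for illustration.
def transmitted_fraction(mu_per_cm, thickness_cm):
    """Fraction of a beam surviving a slab: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

lead = transmitted_fraction(mu_per_cm=60.0, thickness_cm=0.1)    # 1 mm of lead
drywall = transmitted_fraction(mu_per_cm=0.4, thickness_cm=1.0)  # 1 cm of drywall
print(f"lead: {lead:.4f}, drywall: {drywall:.2f}")
```

Lead passes a tiny but nonzero fraction, and drywall passes most of the beam, so a real x-ray beam keeps going through wall after wall instead of conveniently stopping at the thing to be seen.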
The biggest problem of all for any theory of x-ray vision is as J.B. Pittenger says,
One fundamental problem with the plausibility of Superman's x-ray vision lies in its need to make objects serve, at different times, as both media and things-to-be-seen. This places rather strong requirements on the nature of the rays or on the device that processes the rays.
So why did J.B. go to all this trouble of figuring out all the problems with Superman's vision?
The contrast between human vision and Superman's x-ray vision can be useful in helping students understand the importance to vision of the physical nature of light and its interaction with the air and objects in the environment.
Human vision has evolved to make use of several physical properties of 'visible' light: over short distances it passes largely unchanged through air, thus making air nearly invisible; it is reflected by most surfaces in the environment, thus allowing them to be visible; and the reflection is only partial, thus structuring the light so as to provide information to the perceiver.
If you're interested in reading the article you'll have to head over to your university library since the article is not yet available online. If you do manage to get a digital copy I would love a copy!
The 5 Senses, or Maybe 7, Probably 9, Perhaps 11
When we talk about human senses, we traditionally assume that there are exactly five senses — sight, hearing, taste, smell, and touch. This way of thinking about the senses is quite ancient, dating back more than 2000 years. On the assumption that this model is factually correct, we teach “the five senses” to our children from a very early age. This model is so ingrained in our culture that any additional method of perception, whether real or imagined, is usually called “a sixth sense”.
However, there are serious weaknesses in our traditional model of five senses. By any objective measure, humans actually possess more than five senses. Of all the basic scientific models that we traditionally teach our children, few deviate from reality as blatantly as our model of the five senses. That’s not to say that the model is completely worthless. Because the model is so simple, it is easily learned, even by very young children. Therefore it can serve as a helpful framework for early learning. But for older children and adults, the model seriously constrains our thinking about the senses.
A principal characteristic of the five-sense model — and one reason why it is so appealing — is that each of the senses is paired with a unique and highly visible part of the body — eyes, ears, mouth, nose, and skin. In fact, this way of thinking is actually a model of our five most obvious sense organs, rather than a proper model of the senses, and this is what makes it ideal for teaching to preschoolers — in conjunction with learning to identify and name the major parts of the head and body.
Unfortunately, there is no universal agreement as to how many senses humans actually have. The main difficulty is that the count can vary considerably depending upon how you define the word “sense”. Another problem is that as you add more senses to the list, the boundaries between the senses become more blurry, and therefore the count depends upon where you decide to draw the boundaries. Another factor is that some animals possess senses that humans do not — such as the ability to detect magnetic fields. (A thorough discussion of the senses should probably take into account all animals, not just humans.) For all of these reasons, experts disagree as to how many senses there actually are. Without a general consensus as to what model should replace the 5-sense model, the old model retains its strong popularity. That said, a 9-sense model (discussed later in this essay) is probably the strongest contender for replacing it.
One key characteristic of the 5-sense model is that all of the senses are related to detecting phenomena that originate outside of our bodies. In other words, the five traditional sense organs are all tools for investigating the world around us. We see, hear, smell, taste, and touch the things that surround us. If we limit our count of senses to those that detect external phenomena, then our count will never get very long — although it will indeed be more than five. One helpful approach is to itemize the categories of detectable phenomena that originate outside the body:
1) Light (electromagnetic radiation)
Our eyes detect light, or more precisely, they detect a limited range of frequencies in the spectrum of electromagnetic radiation. But equally important, the lens in each eye focuses images on the retina, which allows us to deduce the precise shapes and locations of objects that reflect or emit light. The four kinds of photoreceptors in our eyes (rods and three kinds of cones) allow us to distinguish between frequencies of light, which the brain perceives as color. The fact that we have two eyes with overlapping fields of vision provides us with the ability to judge distances.
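As a rough sketch of how two overlapping viewpoints yield distance, here is the standard stereo triangulation approximation. The baseline, focal length, and disparity values below are illustrative numbers, not physiological measurements:

```python
def stereo_distance(baseline_m, focal_px, disparity_px):
    """Estimate distance to an object seen from two viewpoints.

    Uses the standard triangulation approximation
        distance = baseline * focal_length / disparity,
    where disparity is the horizontal shift of the object between
    the two images (in pixels). Eyes don't literally compute this,
    but the underlying geometry is the same.
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity")
    return baseline_m * focal_px / disparity_px

# A nearer object produces a larger disparity, hence a smaller distance:
near = stereo_distance(baseline_m=0.065, focal_px=800, disparity_px=40)
far = stereo_distance(baseline_m=0.065, focal_px=800, disparity_px=4)
```

The key point is the inverse relationship: as an object approaches, the disparity between the two views grows, which is exactly the cue the brain exploits.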
Several kinds of animals, including birds and bees, have the ability to see light at frequencies extending into the ultraviolet, which humans cannot see. Certain kinds of snakes can detect infrared light using “pit organs” in their heads — allowing them to detect the body heat of their prey. However, the sensors in these pit organs work by detecting subtle temperature changes in the tissue lining the pits, rather than directly detecting the photons of infrared light.
As with all of the senses, detecting something with the sense organs is only the first step. The information then needs to be relayed to the brain via nerve pathways, and the brain assembles and interprets the information to produce our perception of the sense. It is our brain that sees patterns, colors, and movement in the data sent from the eyes. It is our brain — not our eyes — that picks out faces in a crowd or in a photograph.
2) Sound waves (vibrations)
Our ears detect sound waves in the air, within a certain range of frequencies. Although we cannot hear sounds whose frequencies lie outside that range, we are very good at distinguishing between the audible frequencies, and at distinguishing other characteristics of sounds. Because we have two ears, we have a sense of what direction a sound is coming from. These abilities not only help us to detect what is happening in the world around us, but also allow us to communicate with other humans through speech.
Some animals are skilled at detecting vibrations in other media besides air. Animals that live in water will, of course, detect sound waves in water. Other animals can detect vibrations in more solid objects. For example, an insect trapped in a spider web sets up vibrations that not only alert the spider, but also tell the spider certain details about what has been caught. Many kinds of animals, including elephants, can detect and interpret vibrations coming through the ground.
Some animals, such as bats, have developed the ability to “see” their surroundings through echo-location. This means that they can determine the locations and shapes of nearby objects by detecting sound waves bouncing off of them — somewhat analogous to our own ability to assemble a mental image of the world around us by observing reflected light.
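The time-of-flight arithmetic behind echo-location is simple: the pulse travels out and back, so the one-way distance is speed times time divided by two. A minimal sketch, assuming sound in air at roughly 20 °C:

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s in air at about 20 degrees C

def echo_distance(round_trip_s, speed=SPEED_OF_SOUND_AIR):
    """Distance to a reflecting object from an echo's round-trip time.

    The pulse travels out and back, so the one-way distance is
    speed * time / 2.
    """
    return speed * round_trip_s / 2.0

# An echo returning after 10 ms puts the object about 1.7 m away:
d = echo_distance(0.010)
```

A bat does this continuously and with far richer information (Doppler shifts, echo shape), but the distance cue reduces to this one formula.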
3) Odors & flavors (chemical molecules)
Our senses of smell and taste are both based on detecting molecules of various substances that come in contact with our bodies. In the case of smell, we use the nose to detect airborne molecules of materials — in other words, substances that have evaporated into the air. In the case of taste, we detect five distinct categories of molecules that are present in our food — or in anything else we put into our mouths.
Our perception of taste is due to input from both of these senses. The taste buds on the tongue detect molecules that are sweet, sour, salty, bitter, and savory — but all of the other flavors we detect in our food are due to the molecules that reach the nose. The mouth and nose are connected to each other by passageways at the back of the throat. As we chew our food, we release volatile molecules that waft up through this connection into the nose. In contrast to the five distinct types of taste buds, the nose includes around 400 distinct olfactory receptor types. These 400 receptor types fire in a vast number of possible combinations, allowing us to detect millions of distinct odors.
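To get a feel for how quickly a few hundred receptor types produce an astronomical number of combinations, suppose (as a deliberately crude simplification) that each of the roughly 400 receptor types were merely "on" or "off":

```python
# Under the crude on/off simplification, each of ~400 receptor types
# doubles the number of possible activation patterns, giving 2**400.
# Real receptors respond in graded, overlapping ways, so this is only
# an illustration of the combinatorics, not a biological claim.
receptor_types = 400
patterns = 2 ** receptor_types
digits = len(str(patterns))  # how many decimal digits the count has
```

Even this toy count has over a hundred digits, so "millions of combinations" is, if anything, a vast understatement of the available coding space.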
We all know that there are many other animals besides humans that can smell with their noses and taste with their mouths. The surprise is that certain creatures can taste or smell with other sense organs. Some insects can detect airborne molecules with their antennae — meaning that they use their antennae to smell. Some insects can detect molecules in materials that they touch with their feet, meaning that they have a sense of taste in their feet.
4) Direct contact (touch or pressure)
We have several distinct kinds of receptors in our skin, one type of which specializes in detecting touch or pressure. This allows us to determine when our body has come into contact with an external object. Although these receptors are in all parts of our skin, the density of the receptors varies considerably. In other words, in some parts of our skin — such as our hands — many receptors are packed into a small area, giving those parts of our skin a much better ability to gather information and to discern shapes, sizes, and textures.
Our hands have a second advantage compared to other parts of our skin. The flexibility of our hands allows us to explore surfaces in much more detail. With our eyes closed, we can easily determine the shape and size of a small object just by touching it with our hands. This is very hard to do with any other part of our skin. Part of the trick is that we don’t have to feel the entire surface at once. We can spend several seconds feeling different parts of the surface, and then our brain puts the information together. So in our traditional 5-sense model, we could have associated the sense of touch with hands, rather than skin — there are good arguments both ways.
The sense of touch can be extended over some distance by the use of a long, slender appendage, such as the whiskers of a cat or the feelers of an insect or crustacean. In the case of a cat’s whiskers, the touch receptors are located in the skin surrounding the base of the whisker. But in the case of a feeler (an antenna used for touching), the touch receptors are actually located in the feeler. In many cases, the same antennae contain other kinds of sense receptors, allowing for smell, taste, hearing or other capabilities.
5) Heat & cold (temperature)
Another type of receptor in our skin is one that detects changes in temperature — the hot and cold receptors. Although this type of receptor provides us with information about the world around us, it does so indirectly — because these receptors do not directly sense the outside world. Instead, they detect temperature changes in the skin. The skin, in turn, is heated or cooled by contact with the air or other objects, and also by exchange of radiant energy (primarily infrared radiation). The upshot is that when we feel the heat of a fire, it is not by directly detecting the radiant energy striking the skin, but by detecting the resulting change in the temperature of the skin. (The main difference between our temperature sense and the pit organs in a snake — other than the degree of sensitivity — is that the pit organs allow the snake to more accurately pinpoint the direction from which the heat originates.)
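The indirectness described above can be illustrated with a toy Newton's-law-of-cooling model: the receptors read skin temperature, and the skin only gradually drifts toward the temperature of whatever it touches. The starting temperature, rate constant, and time step below are arbitrary illustrative values, not physiological measurements:

```python
def skin_temperature(ambient_c, skin_c=33.0, k=0.1, seconds=30):
    """Toy model of skin temperature drifting toward ambient temperature.

    Applies Newton's law of cooling in 1-second Euler steps: each
    second, the skin closes a fraction k of the gap to the ambient
    temperature. The hot/cold receptors respond to this skin
    temperature, not to the ambient temperature itself.
    """
    t = skin_c
    for _ in range(seconds):
        t += k * (ambient_c - t)
    return t

# Skin warms toward, but does not instantly reach, a 45 degree C surface:
t = skin_temperature(ambient_c=45.0)
```

This is why a hot object feels hotter the longer you hold it: the receptors are reporting the skin's rising temperature, not a fixed property of the object.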
In the 5-sense model, the sense of hot and cold is completely ignored, or else it is bundled into the sense of touch, even though it is a very different sense. After all, you don’t have to touch the sun in order to feel its heat!
6) Gravity & acceleration
Our ability to detect gravity and acceleration is usually called our “sense of balance”. For this we rely upon the vestibular organs of the inner ear: the otolith organs, which detect gravity and linear acceleration, and the semi-circular canals, which detect rotation. Even though gravity is a phenomenon that originates outside of our bodies, the only thing we learn from detecting it is which way is “up”, which allows us to maintain our bodies in an upright position as we stand or walk — even when our eyes are closed.
This is a very real sense, with an easily identified sense organ. And yet this sense is not included in our traditional 5-sense model — in part because the sense organ is not visible on the outside of the body, and in part because the 5-sense model predates our understanding of the role of the semi-circular canals.
Although mammals rely on their semi-circular canals to provide a sense of balance, many invertebrates use a very different organ called a statocyst. In either case, the purpose is to detect gravity in order to know which way is up, so that the body can be properly oriented for safety or locomotion.
7) Magnetic fields
Many kinds of animals are able to detect magnetic fields, even though humans cannot. This gives them an ability to detect the earth’s magnetic field, which can result in a powerful sense of direction (especially north and south). The best-known examples of this phenomenon are birds that fly long distances for their spring and autumn migrations.
A sense organ that detects magnetic fields can be compared to a compass. However, the individual receptors can be extremely small, and could theoretically be anywhere in the body, even in the brain itself. The upshot is that while we have excellent evidence that many kinds of animals have a magnetic sense of direction, in most cases we are not sure exactly where the magnetic receptors are located.
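Reading a compass heading from a field measurement is a small exercise in trigonometry: take the horizontal components of the field and convert them to an angle, much as a phone's compass app does with its magnetometer. The axis conventions below (x toward north, y toward east) are an assumption for illustration:

```python
import math

def heading_degrees(bx, by):
    """Compass heading from the horizontal components of a magnetic
    field reading, with bx pointing north and by pointing east.

    Returns degrees clockwise from magnetic north, in [0, 360).
    """
    return math.degrees(math.atan2(by, bx)) % 360.0
```

For example, a field pointing due east (`bx=0, by=1`) yields a heading of 90 degrees. An animal's magnetic sense presumably delivers something like this directional signal, however its receptors are built.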
8) Electrical fields and static charge
Some aquatic animals have the ability to sense changes in the electric field in their immediate vicinity. The best-known examples are sharks and rays, but certain other sea animals also have this ability, including dolphins. This sense can be used to identify prey and other nearby objects, which can be quite useful when the water is murky or dark, or when the prey is hiding in the mud or silt on the seafloor.
For animals that live surrounded by air instead of water, the direct sensing of electric fields is not possible. However, some animals — even humans — can detect static charges through indirect means. In the case of humans, a nearby static charge will cause the hair on our arms to stand up, which we can easily feel. Of course, we can also feel gusts of wind using the hairs on our arms. The receptors surrounding the hairs cannot distinguish between these two phenomena — but our brains, upon receiving the information from many hair follicles over a period of several seconds, can easily distinguish the two. This ability should be categorized as an extension to our sense of touch — like the whiskers of a cat — rather than a separate sense. In contrast, sharks really do have an additional sense for directly detecting electrical fields.
Let’s pause here and take stock of our list so far. We have identified eight detectable external phenomena — nine if you separate airborne molecules (smell) from non-airborne (taste) — and every one of these phenomena corresponds to a specific sense in various animals. Humans have seven of these nine senses, lacking only the ability to detect magnetic fields and electric fields. Therefore, any new model of the senses should list at least 7 senses (if we consider only humans) or 9 senses (if we consider all animals). If we define the word “sense” to mean only the detection of external phenomena, then our count is finished: There are 7 human senses, and 9 principal senses across the animal kingdom.
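The tally above can be written out as a small lookup table; the nine phenomena and the seven-of-nine human count below follow the essay's own enumeration:

```python
# External phenomena and whether humans have a sense that detects
# them directly. Smell and taste are counted separately here, giving
# nine phenomena in total.
EXTERNAL_SENSES = {
    "light": True,
    "sound": True,
    "smell": True,
    "taste": True,
    "touch": True,
    "temperature": True,
    "gravity/acceleration": True,
    "magnetic fields": False,   # birds, some other animals
    "electric fields": False,   # sharks, rays, some dolphins
}

human_count = sum(EXTERNAL_SENSES.values())  # senses humans possess
animal_count = len(EXTERNAL_SENSES)          # senses found across animals
```

Running this reproduces the essay's arithmetic: seven human senses of external phenomena, nine across the animal kingdom.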
However, our bodies have additional sense receptors beyond the ones we have catalogued so far. Instead of telling us about phenomena that are external to our bodies, these additional sensors provide information about our own bodies. The most obvious example is our sense of pain, triggered by pain receptors located not just in our skin, but also deeper within our bodies. (Broken bones and other internal injuries can be quite painful.) But we are also aware of other internal phenomena, such as being hungry or thirsty, or feeling too full from having eaten too much, or having a full bladder and needing to go to the bathroom. All of these require some sort of sensor within the body in order to detect the issue. The sensors, in turn, send messages to the brain via the nervous system. Therefore we could legitimately refer to a sense of hunger, a sense of thirst, or a sense of being full. In fact, scientists have catalogued a long list of such internal senses. If we were to include all of these senses in our list, then we could easily reach 20 or more distinct human senses.
A subtle but important sense that has gotten a lot of press recently is called proprioception. This is the sense of knowing how the various parts of your body are positioned, without relying on sight or touch. A demonstration of this sense is to close your eyes, and then to reach up and touch your nose. Most people can do this quite easily. Several recent articles in the popular press have stated that because of this newly recognized sense, we now know that humans actually have six senses instead of five. This is obviously incorrect, because if we were to agree on a new model of the senses to teach in our schools, then the senses of balance, temperature, and pain are all stronger candidates for inclusion than proprioception. That said, proprioception is certainly a valid candidate, and ought to be considered.
A related issue is what terminology to use when teaching the senses to older children. We know to use very simple terminology when teaching preschoolers, but as kids grow older, we have a tendency to introduce more complex terminology — some of which is rather pointless. For example, there is little value in teaching children to say “audioception” in place of “sense of hearing”, or “gustation” instead of “sense of taste”. By the same token, the formal term “proprioception” simply gets in the way of teaching kids about the corresponding sense. It would be more appropriate to use an everyday term that conveys the underlying concept in an easily understood manner.
So what really is the underlying concept for proprioception, expressed in a single word? Some people explain proprioception as knowing the location of one’s limbs — but a “sense of location” would be a highly misleading phrase. Furthermore, the receptors in our muscles, tendons, and joints do not actually sense the location of our limbs in space. Instead, these receptors detect the degree to which the muscles are flexed and the angles of the joints, which allows the brain to deduce the position of the body and the position of each of the limbs. Therefore the best term for this sense, at least for teaching children, is “a sense of position”.
So imagine if we were all to agree on a new model of the senses for teaching in the upper primary grades. How many senses would we include in this model, and what would those senses be? In contrast to “The Five Traditional Human Senses”, the strongest alternative model is “The Nine Primary Human Senses”, consisting of:
- sight
- hearing
- smell
- taste
- touch
- temperature
- balance
- pain
- position (an easier word and concept than “proprioception”)
In conjunction with this model, it could be useful to teach our children “The Eleven Primary Animal Senses”, which consists of the above nine human senses, along with the magnetic sense of direction and the perception of electrical fields in salt water.
Although there is no consensus on a census of the senses, the 9-sense model is slowly gaining ground as an excellent model for educational purposes, and it is certainly a strong contender for inclusion in the curriculum of the upper primary grades. That said, there is also a reasonable 7-sense model, and a reasonable 11-sense model. A model with more than 20 senses is certainly possible, even though not particularly suitable for teaching in primary school. We should always remember that while the science models we teach our kids are helpful tools for learning, these models are usually a simplified approximation of reality, rather than a perfect and unassailable reflection of reality. Therefore we should not confuse our models with absolute truth.
If you’re struggling with eyestrain on the computer, try strong blue-light filtering glasses that are tinted slightly yellow. Research shows it may help a tiny bit. Or you could try shifting your work screen a bit more yellow using F.Lux or Iris.
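What these screen-warming tools do, at their core, is attenuate the blue channel of every pixel. The function below is a deliberately crude sketch of that idea, not f.lux's or Iris's actual algorithm (real tools adjust the display's color temperature curve, often via gamma tables):

```python
def warm_tint(rgb, strength=0.3):
    """Shift an (r, g, b) color toward yellow by attenuating its
    blue channel.

    strength=0.0 leaves the color unchanged; 1.0 removes blue
    entirely. Channel values are 0-255 integers. The linear scaling
    here is an illustrative simplification.
    """
    r, g, b = rgb
    return (r, g, int(round(b * (1.0 - strength))))

# Pure white picks up a yellowish cast once its blue is reduced:
white_warmed = warm_tint((255, 255, 255))
```

Blue-blocking glasses achieve roughly the same spectral shift optically, filtering short wavelengths before they reach the eye instead of removing them at the screen.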
As for the nighttime blue blockers: there are tons of people out there struggling with sleep, and their kids too. Wearing blue blockers at night is an inexpensive and easy way to improve it.
Ideally, it’d be healthier to go back to no electrical light and use candles and whatnot, but that’s not realistic for most people. So blue blockers can be a cheap way of reclaiming some of your health. For those on the cusp of trying medication, why not order a cheap wrap-around pair, try it for 30 days, and see how you do?
And as we saw in one study, even people who are already quite healthy and sleeping well, like those who lift weights or do endurance training, were able to improve their sleep and recovery.
If you like this kind of stuff and you’re a guy who wants to lift weights, you might like our True Gains program.
Otherwise, what did you think? Leave a comment below and I’ll respond.