Where are action potentials initially created?

Is the axon hillock still considered the place where action potentials are initially generated? (I've heard this site has shifted toward the axon initial segment.)

If one doesn't want to commit oneself to whether it's the axon hillock or the initial segment where action potentials are initiated - because it's somewhere in between, or at both, or it depends on the type of neuron, or we don't know, or we don't care:

How does one talk about that place unequivocally?

Is there an abstract (non-anatomical) technical term for "the place where action potentials are initially created"?

Action potentials are initiated where there is a high density of voltage-gated $Na^+$ channels and that may vary from cell to cell, as you guessed. For an illustrative example see the figure below, reproduced with permission from Kole et al. (2008).

Density of voltage-gated $Na^+$ channels and $Na^+$ influx along the axon of a cortical pyramidal neuron.
Reproduced with permission © Nature Publishing Group

As you can see, in this cell type voltage-gated $Na^+$ channels, labelled in green in the left-most image, are predominantly expressed away from the axon hillock. We can also observe that the expression of voltage-gated $Na^+$ channels correlates with the degree of $Na^+$ influx in a monotonic relationship (right-most panel).

In smaller cells action potential initiation can happen closer to the axon hillock and in cells of some invertebrates even in multiple locations (Kole and Stuart 2012). What may be the advantage of using the axon initial segment (AIS) as the sole site of initiation of action potentials (APs)? Here is a direct quote from Kole and Stuart (2012):

[The AIS] has a small local capacitance ($C$) and therefore requires less inward current ($I$), that is, a smaller number of $Na^+$ channels per unit area, to generate APs compared to larger structures, such as the soma or proximal dendrites. Hence, the AIS is also an energetically favorable site for AP initiation. Furthermore, the small capacitance of the AIS favors rapid changes in membrane potential, as occurs during the upstroke of the AP ($dV/dt = I/C$). Finally, it is worth noting that having a single site of AP generation provides neurons a single locus where inhibition can gate AP initiation. [… ] Initiation of APs further from the soma, taking advantage of the electrical isolation of this region, is a strategy used in some neurons to increase their capacity to discriminate the arrival time of different synaptic inputs.
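
The capacitance argument in the quote can be made concrete with a back-of-the-envelope calculation based on $dV/dt = I/C$: for a fixed upstroke rate, the required current scales with local membrane area. The geometry below (a thin cylindrical AIS versus a spherical soma) is hypothetical, and the ~1 µF/cm² specific membrane capacitance is a standard textbook value:

```python
import math

# Specific membrane capacitance: a standard textbook value (~1 uF/cm^2).
C_SPECIFIC_F_PER_CM2 = 1.0e-6

def cylinder_area_cm2(diam_um, length_um):
    """Lateral surface area of a cylindrical process, in cm^2."""
    return math.pi * (diam_um * 1e-4) * (length_um * 1e-4)

def current_for_upstroke(area_cm2, dvdt_v_per_s=400.0):
    """I = C * dV/dt: current (A) needed to drive a given upstroke rate."""
    return C_SPECIFIC_F_PER_CM2 * area_cm2 * dvdt_v_per_s

# Hypothetical geometry: a 1.5 um x 30 um cylindrical AIS vs. a 20-um-diameter
# spherical soma (area 4*pi*r^2).
i_ais = current_for_upstroke(cylinder_area_cm2(1.5, 30.0))
i_soma = current_for_upstroke(4.0 * math.pi * (10.0 * 1e-4) ** 2)
print(f"AIS: {i_ais * 1e9:.2f} nA, soma: {i_soma * 1e9:.2f} nA")
```

With these (invented) dimensions the soma needs roughly an order of magnitude more inward current than the AIS to achieve the same upstroke rate, which is the quote's point about energetic favorability.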

M. H. P. Kole et al., Action potential generation requires a high sodium channel density in the axon initial segment. Nat. Neurosci. 11, 178-186 (2008). doi: 10.1038/nn2040
M. H. P. Kole, G. J. Stuart, Signal Processing in the Axon Initial Segment. Neuron. 73, 235-247 (2012). doi: 10.1016/j.neuron.2012.01.007

Resting potential

Resting potential, the imbalance of electrical charge that exists between the interior of electrically excitable neurons (nerve cells) and their surroundings. The resting potential of electrically excitable cells lies in the range of −60 to −95 millivolts (1 millivolt = 0.001 volt), with the inside of the cell negatively charged. If the inside of the cell becomes more negative (i.e., if the potential is made more negative than the resting potential), the membrane or the cell is said to be hyperpolarized. If the inside of the cell becomes less negative (i.e., the potential moves from the resting potential toward zero), the process is called depolarization.
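
The terminology above can be summarized in a few lines. A minimal sketch, assuming a resting potential of −70 mV (a value within the stated −60 to −95 mV range; the function name is our own):

```python
def polarization_state(v_mv, resting_mv=-70.0):
    """Classify a membrane potential relative to rest (terminology as above)."""
    if v_mv < resting_mv:
        return "hyperpolarized"  # inside more negative than at rest
    if v_mv > resting_mv:
        return "depolarized"     # inside less negative than at rest
    return "at rest"
```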

During the transmission of nerve impulses, the brief depolarization that occurs when the inside of the nerve cell fibre becomes positively charged is called the action potential. This brief alteration of polarization, thought to be caused by the shifting of positively charged sodium ions from the outside to the inside of the cell, results in the transmission of nerve impulses. After depolarization, the cell membrane becomes relatively permeable to positively charged potassium ions, which diffuse outward from the inside of the cell, where they normally occur in rather high concentration. The cell then resumes the negatively charged condition characteristic of the resting potential.

This article was most recently revised and updated by Kara Rogers, Senior Editor.


When graded excitatory postsynaptic potentials (EPSPs) depolarize the soma to spike threshold at the axon hillock, the axon first propagates an impulse through the action of its voltage-gated sodium and voltage-gated potassium channels. The action potential arises in the axon first because research shows that sodium channels in the dendrites exhibit a higher threshold than those in the axonal membrane (Rapp et al., 1996). This higher dendritic threshold helps prevent synaptic input alone from triggering an action potential there; only when the soma is depolarized enough by accumulating graded potentials to fire an axonal action potential are these channels activated to propagate a signal traveling backwards (Rapp et al., 1996). Generally, individual EPSPs from synaptic activation (usually on the order of a few millivolts each) are not large enough to activate the dendritic voltage-gated calcium channels, so backpropagation is typically believed to happen only when the cell fires an action potential. These dendritic sodium channels are abundant in certain types of neurons, especially mitral and pyramidal cells, and inactivate quickly. Initially, it was thought that an action potential could travel only in one direction down the axon, toward the axon terminal, where it ultimately signals the release of neurotransmitters. However, more recent research has provided evidence for the existence of backwards-propagating action potentials (Staley 2004).

To elaborate, neural backpropagation can occur in one of two ways. First, during the initiation of an axonal action potential, the cell body, or soma, can become depolarized as well. This depolarization can spread through the cell body toward the dendritic tree, where voltage-gated sodium channels are located. Activation of these voltage-gated sodium channels can then result in the propagation of a dendritic action potential. Such backpropagation is sometimes referred to as an echo of the forward-propagating action potential (Staley 2004). Second, it has been shown that an action potential initiated in the axon can create a retrograde signal that travels in the opposite direction (Hausser 2000). This impulse travels up the axon, eventually depolarizing the cell body and thus triggering the dendritic voltage-gated calcium channels. As in the first process, the triggering of dendritic voltage-gated calcium channels leads to the propagation of a dendritic action potential.

It is important to note that the strength of backpropagating action potentials varies greatly between neuronal types (Hausser 2000). Some types of neurons show little to no decrease in the amplitude of action potentials as they invade and travel through the dendritic tree, while others, such as cerebellar Purkinje neurons, exhibit very little action potential backpropagation (Stuart 1997). Still other neuronal types show intermediate degrees of amplitude decrement during backpropagation. This is thought to be because each neuronal type contains different numbers of the voltage-gated channels required to propagate a dendritic action potential.

Generally, synaptic signals received by the dendrites are combined in the soma to generate an action potential that is then transmitted down the axon toward the next synaptic contact. The backpropagation of action potentials therefore poses the risk of initiating an uncontrolled positive feedback loop between the soma and the dendrites: when an action potential is triggered, its dendritic echo could enter the dendrite and potentially trigger a second action potential. If left unchecked, an endless cycle of action potentials triggered by their own echo would result. To prevent such a cycle, most neurons have a relatively high density of A-type K+ channels.

A-type K+ channels belong to the superfamily of voltage-gated ion channels and are transmembrane channels that help maintain the cell's membrane potential (Cai 2007). Typically, they play a crucial role in returning the cell to its resting membrane potential after an action potential by allowing an inhibitory current of K+ ions to flow quickly out of the neuron. The high density of these channels in the dendrites explains why the dendrites are unable to initiate an action potential, even during synaptic input. Additionally, these channels provide a mechanism by which the neuron can suppress and regulate the backpropagation of action potentials through the dendrite (Vetter 2000). Pharmacological antagonists of these channels increased the frequency of backpropagating action potentials, which demonstrates their importance in keeping the cell from firing excessively (Waters et al., 2004). Results have indicated a linear increase in the density of A-type channels with increasing distance into the dendrite away from the soma, and this increasing density dampens the backpropagating action potential as it travels into the dendrite. Essentially, inhibition occurs because the A-type channels facilitate the outflow of K+ ions to keep the membrane potential below threshold (Cai 2007). Such inhibition limits EPSPs and protects the neuron from entering a runaway positive feedback loop between the soma and the dendrites.
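
The density gradient described above can be caricatured in a toy model: assume A-type channel density rises linearly with distance from the soma, and that the local attenuation rate of a backpropagating spike is proportional to that density. All constants here are hypothetical, chosen only to illustrate the qualitative effect:

```python
import math

# Toy model of the A-type K+ gradient: density rises linearly with distance
# from the soma, and the local attenuation rate of a backpropagating spike is
# taken to be proportional to that density. All constants are hypothetical.
BASE_DENSITY = 1.0     # arbitrary density units at the soma
DENSITY_SLOPE = 0.02   # density increase per micrometre
ATTEN_GAIN = 0.002     # attenuation per unit density per micrometre

def bap_amplitude(x_um, v0_mv=100.0):
    """Backpropagating spike amplitude after x_um of dendrite (toy model)."""
    # integral of ATTEN_GAIN * (BASE_DENSITY + DENSITY_SLOPE * s) ds from 0 to x
    decay = ATTEN_GAIN * (BASE_DENSITY * x_um + 0.5 * DENSITY_SLOPE * x_um ** 2)
    return v0_mv * math.exp(-decay)
```

Because the density grows with distance, the attenuation accelerates: the spike loses amplitude faster the deeper it travels into the dendritic tree.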

Since the 1950s, evidence has existed that neurons in the central nervous system generate an action potential, or voltage spike, that travels both down the axon to signal the next neuron and back through the dendrites, sending a retrograde signal to the presynaptic neurons. This current decays significantly with distance travelled along the dendrites, so the effects are predicted to be more significant for neurons whose synapses are near the postsynaptic cell body, with a magnitude that depends mainly on sodium-channel density in the dendrite. It also depends on the shape of the dendritic tree and, more importantly, on the rate of synaptic input to the neuron. On average, a backpropagating spike loses about half its voltage after traveling nearly 500 micrometres.
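
The "about half its voltage after nearly 500 micrometres" figure corresponds, under a standard passive-cable assumption of exponential decay, to a length constant of 500/ln 2 ≈ 721 µm. A sketch (only the 500 µm half-distance comes from the text; the exponential form is our simplifying assumption):

```python
import math

# The text gives one number: a backpropagating spike loses about half its
# amplitude over ~500 um. Assuming passive exponential decay (a standard
# cable-theory simplification), that half-distance fixes the length constant.
HALF_DISTANCE_UM = 500.0
LAMBDA_UM = HALF_DISTANCE_UM / math.log(2.0)  # ~721 um

def relative_amplitude(x_um):
    """Fraction of the spike amplitude remaining x_um into the dendrite."""
    return math.exp(-x_um / LAMBDA_UM)
```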

Backpropagation occurs actively in the neocortex, hippocampus, substantia nigra, and spinal cord, while in the cerebellum it occurs relatively passively. This is consistent with observations that synaptic plasticity is much more apparent in areas like the hippocampus, which controls spatial memory, than the cerebellum, which controls more unconscious and vegetative functions.

The backpropagating current also causes a voltage change that increases the concentration of Ca2+ in the dendrites, an event which coincides with certain models of synaptic plasticity. This change also affects future integration of signals, leading to at least a short-term response difference between the presynaptic signals and the postsynaptic spike. [1]

While many questions have yet to be answered regarding neural backpropagation, a number of hypotheses exist about its function. Proposed functions include involvement in synaptic plasticity, involvement in dendrodendritic inhibition, boosting synaptic responses, resetting membrane potential, retrograde actions at synapses, and conditional axonal output. Backpropagation is believed to help form long-term potentiation (LTP) and Hebbian plasticity at hippocampal synapses. Since artificial LTP induction, using microelectrode stimulation, voltage clamp, etc., requires the postsynaptic cell to be slightly depolarized when EPSPs are elicited, backpropagation can serve as the means of depolarizing the postsynaptic cell.

Backpropagating action potentials can induce long-term potentiation by acting as a signal that informs the presynaptic cell that the postsynaptic cell has fired. Spike-timing-dependent plasticity refers to the narrow time window within which near-coincident firing of the pre- and postsynaptic neurons induces plasticity. Neural backpropagation acts within this window, interacting with NMDA receptors at the apical dendrites by assisting in the removal of the voltage-sensitive Mg2+ block (Waters et al., 2004). This permits a large influx of calcium, which provokes a cascade of events leading to potentiation.
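
The narrow timing window can be sketched with the usual exponential form: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. The amplitudes and the 20 ms time constant below are illustrative, not values from the sources cited above:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a spike pair, dt_ms = t_post - t_pre.
    Exponential STDP window; amplitudes and time constant are illustrative."""
    if dt_ms > 0:                                   # pre before post: LTP
        return a_plus * math.exp(-dt_ms / tau_ms)
    if dt_ms < 0:                                   # post before pre: LTD
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0
```

Note how the magnitude falls off exponentially as the pre/post interval grows, capturing the "narrow time frame" in the text.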

Current literature also suggests that backpropagating action potentials are responsible for the release of retrograde neurotransmitters and trophic factors, which contribute to the short-term and long-term efficacy between two neurons. Since backpropagating action potentials essentially replicate the neuron's axonal firing pattern, they help establish synchrony between the pre- and postsynaptic neurons (Waters et al., 2004).

Importantly, backpropagating action potentials are necessary for the release of Brain-Derived Neurotrophic Factor (BDNF). BDNF is an essential component for inducing synaptic plasticity and development (Kuczewski N., Porcher C., Ferrand N., 2008). Moreover, backpropagating action potentials have been shown to induce BDNF-dependent phosphorylation of cyclic AMP response element-binding protein (CREB) which is known to be a major component in synaptic plasticity and memory formation (Kuczewski N., Porcher C., Lessmann V., et al. 2008).

While a backpropagating action potential can presumably cause changes in the weight of the presynaptic connections, there is no simple mechanism for an error signal to propagate through multiple layers of neurons, as in the backpropagation algorithm used in artificial neural networks. However, simple linear topologies have shown that effective computation is possible through signal backpropagation in this biological sense. [2]

Transmission at the synapse

Once an action potential has been generated at the axon hillock, it is conducted along the length of the axon until it reaches the terminals, the fingerlike extensions of the neuron that are next to other neurons and muscle cells (see the section The nerve cell: The neuron). At this point there exist two methods for transmitting the action potential from one cell to the other. In electrical transmission, the ionic current flows directly through channels that couple the cells. In chemical transmission, a chemical substance called the neurotransmitter passes from one cell to the other, stimulating the second cell to generate its own action potential.

Why we need computational models in biology

Many researchers begin the scientific process by making observations of the natural world and collecting data. They then try to extract patterns from these observations and data using statistical analysis. However, defining statistical correlations alone does not result in understanding. Instead, a theory is needed. A scientific theory aims to provide a unifying framework for a large class of empirical data to help researchers make testable predictions.

Although theory is celebrated in the physical sciences, it is questioned in the life sciences. Theory in biology was initially obscure and often relegated to highly technical journals. However, with the advent of big data, theory has now come to the forefront in biology. In this post, I will discuss the role of theory in biology, provide examples of important models, and conclude with an in-person interview with prominent theorist Larry Abbott at Columbia University.

Why Physicists Like Models, and Why Biologists Should

In biology, few quantitative theories are predictive, leading some scientists to distrust theoretical studies. In physics, the opposite is true. The difference lies in the nature of the systems being studied: while physics derives beauty from simple reductionist elegance, biology finds beauty in complexity and richness. For this reason, simple mathematical theories of biology are often incorrect. Many experimentalists also see simulated data as too far removed from biology. Some are frustrated by the dense language of computational papers and inaccessible math used to explain straightforward biological principles.

I believe that computational models can complement experimental data to provide superior biological understanding and treatment of diseases. A good computational model inspires new experiments and provides new insights. While models cannot prove what mechanisms are at work, they can suggest what variables are most important to investigate in an experiment. Daniel Hillis compares the utility of theoretical models to that of model organisms: "models cannot prove anything conclusive about biological evolution anymore than the nervous system of a nematode can prove anything about the nervous system of a mammal." In other words, both computational models and simpler nervous systems serve as instructive examples.

Learning from Theory: The Discovery of DNA Structure

Theory played an important role in the discovery of DNA structure. While Francis Crick had a background in mathematics and physics, James Watson had expertise in the molecular biology of phage, the viruses that infect bacteria. Working together, these scientists used model building to reveal the famed double helix. X-ray crystallographic data obtained by Rosalind Franklin and Maurice Wilkins at King's College London were also crucial to the discovery. In particular, Franklin's photo of the B-form of DNA pointed to the helical structure of DNA. "The instant I saw the picture, my mouth fell open and my heart began to race," wrote Watson. Together, Watson and Crick built a now famous model of DNA using metal plates for nucleotides and rods for the bonds between them. The true beauty of this model is that structure implies function, and this discovery facilitated a new era in biological research.

Computational Models in Neuroscience

Computational models have become very popular in neuroscience, where the Hodgkin-Huxley model of action potentials is arguably the most important theory. This model is a set of nonlinear differential equations that approximates the electrical behavior of excitable cells such as neurons and cardiac myocytes. The Hodgkin-Huxley model has inspired contemporary neuroscientists like Professor Abbott to model the firing patterns of cortical neurons. In a PLOS Computational Biology paper, Abbott addresses the relationship between tuning curves and neural circuits. A tuning curve is a graph of a neuron's response, such as its firing rate or threshold intensity, as a function of a stimulus parameter, for example sound frequency for an auditory neuron. Since neurons have distinct tuning curves that are thought to arise from structured synaptic connectivity, a theoretical model can predict the organization of synaptic inputs. Knowledge of the organization of synaptic inputs, such as the identity, strength, and location of each synapse, is critical for understanding how neurons compute.
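
For readers who have not seen the Hodgkin-Huxley model in code, here is a minimal simulation using the standard squid-axon parameters with simple forward-Euler integration (the constant 10 µA/cm² stimulus and the step size are our choices; a production model would use a better integrator):

```python
import math

# Standard squid-axon Hodgkin-Huxley parameters.
# Units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2.
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent gate rate functions (1/ms).
def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_max_ms=50.0, dt=0.01):
    """Forward-Euler integration of the Hodgkin-Huxley equations."""
    v = -65.0
    m = a_m(v) / (a_m(v) + b_m(v))  # start gates at steady state
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))
    trace = []
    for _ in range(int(t_max_ms / dt)):
        i_na = G_NA * m ** 3 * h * (v - E_NA)
        i_k = G_K * n ** 4 * (v - E_K)
        i_l = G_L * (v - E_L)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return trace

trace = simulate()
print(f"peak membrane potential: {max(trace):.1f} mV")
```

With a sustained 10 µA/cm² stimulus, the membrane fires repetitively, with each upstroke overshooting 0 mV and then undershooting toward the potassium reversal potential.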

Professor Abbott on Creating Models

In a PLOS Computational Biology paper, Lalazar et al. create a theoretical model of arm posture control in monkeys. The results of this neural network model were compared to biological data obtained from the primary motor cortex. Surprisingly, the study found that the biological data were consistent with completely random synaptic connectivity in the model.

Professor Abbott was a theoretical particle physicist at Brandeis University for ten years before he switched to neuroscience. Today, he is a leader in theoretical neuroscience and a co-author on the first comprehensive textbook on theoretical neuroscience. He was inspired to transition into biology after a visit to the laboratory of Professor Eve Marder at Brandeis University. Mesmerized by the sound of spikes of electrical activity in neural tissue, Abbott trained with Marder for one year. They subsequently published together for over a decade.

At Columbia University, Abbott founded the Center for Theoretical Neuroscience. He collaborates extensively with experimental biologists including Eric Kandel. Abbott uses computer simulations and analytical techniques to model and analyze neural circuits that drive behavior. "I first try to take all the important features of a neural circuit and then see what they imply. I then see if they agree with what I believe are the important experiments," Abbott says during our interview. In other words, Abbott first determines the neurons that participate in a neural circuit. He then generates a simulation to predict how each neuron integrates input signals from synapses.

A good model will recapitulate biology and lead to novel understanding. However, a computational model in biology need not be predictive to be of use. "You can have models of a well-understood phenomenon if you describe it in a new way. These models will lead to greater understanding," Abbott says. He states that it is critical for a model to go beyond our simple intuition.

Learning Computational Skills

Abbott argues it is critical for early-career biologists to learn computational skills. "Knowing skills outside your field before you choose to specialize is really good. It's very hard to do it in the reverse order once you've picked a lab. Statistics and math are important skills today."

I can attest to the benefits of learning computational skills as I first trained in a mechanical engineering lab. When I entered the neuroscience field, I already understood the engineering behind electrophysiology rigs and the math behind theoretical models. In my current research, I merge experiments and theory using the simple nervous system of a fly, and this work has shown me the value of combining these two approaches. To move forward, I believe theoretical biologists and empirical biologists must make their work more accessible and valuable to each other. Professor Abbott is a testament to this type of collaboration and has succeeded in attracting mainstream attention to computational theories in neuroscience. Theory can provide novel insights and change the way experimental biologists understand their subject.

Hillis, W. D. (1993). Why physicists like models and why biologists should. Current Biology, 3(2), 79-81.

Shou, W., Bergstrom, C. T., Chakraborty, A. K., & Skinner, F. K. (2015). Theory, models and biology. eLife, 4, e07158.

Watson, J. D. (1999). The double helix. London: Penguin.

Watson, J. D. (1981). The DNA story: A documentary history of gene cloning. W. H. Freeman.

Purves, D., Augustine, G. J., Fitzpatrick, D., Katz, L. C., LaMantia, A. S., McNamara, J. O., & Williams, S. M. (2001). The auditory system. In Neuroscience (2nd ed.). Sinauer Associates.

Lalazar, H., Abbott, L. F., & Vaadia, E. (2016). Tuning curves for arm posture control in motor cortex are consistent with random connectivity. PLoS Computational Biology, 12(5), e1004910.

Action Potential V: Design and Analysis of Complex Neurons

  • This unit will provide you with the first opportunity to do a more extended analysis of a neural simulation.
  • Although it would be nice, biological neurons never come with helpful, labeled parameter boxes that allow you to easily and quickly change their properties.
  • Instead, each neuron is a novel and unknown system.
  • Thanks to the evolutionary history of neurons, a novel neuron is very likely to have many of the conductances found in other neurons that have been studied previously, even in very different species. Of course there will be variations, and on occasion you may find an entirely new type of ion channel.
  • The major tools at a neurophysiologist's disposal when studying a new neuron will be those you have learned about: current clamp, voltage clamp, and patch clamp.
  • In addition, he or she may have access to a range of pharmacological agents. These are generally drugs that have been found (or have been created) to block a specific ion channel.
    • For example, the cone snail injects its prey with a venom consisting of a variety of peptides (short sequences of amino acids), and many of these act to paralyze prey by binding to specific ion channels.
    • In particular, ω-conotoxin is known to bind to N-type calcium ion channels.
    • Thus, you will be provided with current clamp and voltage clamp simulations that include options for adding pharmacological agents. Using these simulations, you will be studying the properties of hypoglossal motor neurons in neonates (newly born rats) and in adult rats.
      • Using these tools, you will be asked to define how the neuron changes as an animal matures, and, in particular, to measure changes in the maximum conductance of the specific ion channels that are relevant to this difference.
      • It is intrinsically interesting to understand how the nervous system changes with development, since this clarifies the way in which it self-assembles.
      • It is also of considerable medical interest, since many diseases are the result of problems with the developmental processes. Understanding how a neonatal neuron differs from an adult neuron can be the basis for a rational therapy that allows a neuron that has not properly developed to be manipulated pharmacologically to act in ways that are more similar to an adult neuron, and also helps to pinpoint the genetic defect that caused the changes, which in turn could lead to a genetic therapy for the disease.

      Here is a voltage clamp simulation of the multi-conductance neuron that you studied in the previous class. The simulation is based on studies of neonatal rat hypoglossal motor neurons:

      Choose the Voltage clamp simulation with additional conductances. Set the Holding potential to -80 mV, the Step delay to 100 ms, the Step duration to 100 ms, and the Total duration to 300 ms. Looking at the graphs of the currents, conductance, and gates, please answer the following questions:

      • Question 1: What is the reason that the fast potassium current shuts off during the depolarizing pulse? What other current that you have learned about does this?
      • Question 2: What is the reason that the sag current does not return to its resting value after the depolarizing pulse?
      • Question 3: Please look at the Intracellular calcium concentration, and the Calcium Currents, Conductances and Gates. Note that the calcium concentration continues to increase after the depolarizing pulse. Explain.
      • Question 4: What is the reason that the calcium-dependent potassium current goes to zero after the depolarizing pulse ends, even though its conductance has not fallen to zero? Explain.
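
The clamp protocol in the setup above (holding −80 mV, 100 ms step delay, 100 ms step duration, 300 ms total) can be written as a command-voltage waveform. The 0 mV step potential below is a hypothetical choice, since the exercise leaves the step level to you:

```python
def clamp_command(hold_mv=-80.0, step_mv=0.0, delay_ms=100.0,
                  step_ms=100.0, total_ms=300.0, dt_ms=0.1):
    """Command-voltage samples for a single-step voltage clamp protocol."""
    n_total = round(total_ms / dt_ms)
    i_on = round(delay_ms / dt_ms)                  # first sample at step level
    i_off = round((delay_ms + step_ms) / dt_ms)     # first sample back at hold
    return [step_mv if i_on <= i < i_off else hold_mv for i in range(n_total)]

volts = clamp_command()
```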

      You are now ready to start analyzing the differences that occur in hypoglossal motor neurons that arise over the course of development. Below, there is a link to a current clamp simulation that allows you to "toggle" between the adult and the neonatal hypoglossal motor neurons. You may want to open the simulation twice, in two different tabs or windows, and click on the button labeled "Adult simulation" in one of these windows (the default simulation is the "Neonatal simulation"). This way you can directly compare data from the two phases of development in two different windows.

      In this current clamp simulation, all of the cell parameters are hidden from you; you are measuring only the membrane potential. For the following questions, you will make observations about the behavior of the neonatal and adult neurons, and develop hypotheses explaining your observations. Make sure to take snapshots of the data that you get, and to write down your hypotheses. You will test your hypotheses later, using voltage clamp.

      • Question 5: Compare the action potentials in the neonatal hypoglossal motor neuron to the action potentials in the adult hypoglossal motor neuron. How do they differ? Which of the different conductances you have previously studied could be responsible for this difference?
      • Question 6: Set the Total duration of the simulation to 150 ms, and set the Pulse duration to 100 ms. How do the responses of the two kinds of neurons differ from one another? Which of the different conductances could be responsible for this difference?
      • Question 7: Change the Stimulus current first pulse from 2 nA to -2 nA (i.e., inject hyperpolarizing current into both model neurons). How do the responses of the two kinds of neurons differ from one another? Pay attention to the scales on the plots. Which of the different conductances could be responsible for this difference?

      Given your results from Questions 5, 6, and 7, you are now ready to test your hypotheses about which currents may change during development from birth through adulthood. To test these hypotheses, you can use the simulation below, which provides you with voltage clamp tools and pharmacological agents that can specifically block different conductances. Unlike the previous voltage clamp simulations, all you will be shown is the total membrane current, which is the sum of all the individual currents. By using the drugs, you can dissect out the sources of this current (which is what you would do in a laboratory). Once again, you are encouraged to create two windows, one containing the "Adult simulation" and one containing the "Neonatal simulation" so that you can directly compare data from the two phases of development. Again, make sure to take snapshots of the data that you get, to write down your hypotheses prior to doing your experiments, and to reflect on whether the data do or do not correspond to your hypotheses. Note that in these simulations, the capacitive current has been subtracted out to make your analysis easier.

      The different abbreviations used with each drug are the same that you saw in the multiple conductance simulations and are summarized here:

      K: Delayed rectifier potassium conductance (the original Hodgkin-Huxley potassium conductance you first learned about)
      A: Fast transient potassium conductance
      SK: Calcium-dependent potassium conductance
      Na: Fast transient sodium conductance (the original Hodgkin-Huxley sodium conductance you first learned about)
      NaP: Persistent sodium conductance
      H: Sag conductance
      T, N, P: Various calcium conductances

      You can assume that the differences you observe between the neonatal and adult neurons are caused by changes during development in the levels of expression of different types of ion channels. That is, the neonatal and adult neurons differ because some of the conductances strengthened or weakened as the animal matured. In Question 8, you will begin to determine how these conductances change by first determining the leak conductance.

      • Question 8: To begin to assess the source of the differences between the neurons at the different developmental stages, it is useful to see if they can be made more similar to one another if the voltage-dependent conductances are removed. Change the First step potential to -90 mV, and apply all the drugs to the Neonatal simulation (by checking all the boxes) and to the Adult simulation (by checking all the boxes).
        • What is the response of the total membrane current to the hyperpolarizing voltage step?
        • Calculate the conductance of the membrane for both the neonatal and adult neurons from these data. To do this, apply Ohm's law to the change in total membrane current and the change in membrane potential to find the total membrane conductance, $g_{total} = \Delta I_m / \Delta V_m$. Since all other conductances are blocked, the total membrane conductance must be equal to the leak conductance, i.e., $g_{total} = g_{leak}$.
        • What can you say about the conductance of the leak current in the neonatal and in the adult based on this calculation? Is it the same or different?
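As a sketch of the Question 8 arithmetic, the leak conductance calculation might look like the following. The holding potential and the current change are illustrative assumptions, not values produced by the simulation; substitute your own measurements.

```python
# Hypothetical worked example of the Question 8 calculation.
holding_potential_mV = -65.0   # assumed holding potential (not from the text)
step_potential_mV = -90.0      # hyperpolarizing step from Question 8

delta_V_mV = step_potential_mV - holding_potential_mV   # -25 mV

# Suppose the steady-state total membrane current changed by this much
# during the step (illustrative value):
delta_I_nA = -0.25

# Ohm's law: g = I / V. With every voltage-gated conductance blocked,
# the total membrane conductance equals the leak conductance.
# Note the convenient units: nA / mV = microsiemens (uS).
g_leak_uS = delta_I_nA / delta_V_mV

print(f"g_leak = {g_leak_uS:.3f} uS")   # g_leak = 0.010 uS
```

Running the same calculation on the neonatal and adult data lets you compare the two leak conductances directly, as the question asks.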
      • Question 9: You will now begin to analyze the voltage-dependent conductances one at a time. You will design voltage clamp protocols that allow you to estimate the maximum conductance for each voltage-gated current in both the neonatal neurons and the adult neurons.
          • Create a table in your lab notebook for organizing your data. Click this link, and copy the table template into your notebook. For all your measurements and calculations, always include appropriate units!
          • For each of the nine voltage-gated conductances, do the following:
            • Using the pharmacological agents provided, block all conductances except the one you are currently studying.
            • Find a voltage clamp protocol that strongly activates the conductance. That is, find a holding potential, a step potential, and a step duration that result in a total membrane current that is significantly different from the leak current alone. Once you have done so, record the holding and step potentials in your table, and compute and record the step size (ΔV = step potential - holding potential). Remember to include the right units! Here are some things to keep in mind:
              • Both the current you are interested in and the leak current are summed together in the plotted membrane current, so you need to elicit a large current in addition to the leak current.
              • Since some conductances are activated by depolarizations and others by hyperpolarizations, you should try both to see which works best. Your previous experiences studying these conductances should help you get started.
              • Extreme steps in voltage can damage cells, so you should avoid clamping the membrane potential far outside its normal range.
              • Since some conductances take a long time to activate, you may need to significantly extend the duration of the step (and the simulation) to see the maximum current. In some cases, this may take hundreds of milliseconds.
              • If you note that a current takes a long time to activate, you may want to measure the conductance using a tail current protocol:
                • When a conductance takes a long time to activate, this means that several gates must open. Thus, at the end of a long hyperpolarizing or depolarizing pulse, the conductance reaches its maximum value.
                • However, it only takes one gate closing to stop the current, and this usually happens with very little delay.
                • Thus, if you measure the maximum current through a specific channel near the end of a long hyperpolarizing or depolarizing pulse, and then find the minimum value of the current after the pulse ends, the difference in these two current values can be used as a fairly accurate measure of the peak channel conductance.
                • For the calcium-dependent potassium (SK) conductance, you will additionally need to subtract out the conductance of the calcium current you have enabled (which you will need to find independently using the same voltage clamp protocol and measurement timing that you are using for the SK conductance).
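The tail-current recipe above can be sketched numerically. All values below are illustrative assumptions; in particular, the step potential and the reversal potential depend on which conductance you are studying. Dividing the current difference by the driving force during the pulse, to turn it into a conductance, is an extra step stated here as an assumption rather than something the protocol above spells out.

```python
# Sketch of the tail-current measurement described above (assumed values).
V_step_mV = 0.0          # assumed depolarizing step potential
E_rev_mV = -90.0         # assumed reversal potential (e.g., a K+ current)

I_end_of_pulse_nA = 0.9  # maximal current near the end of the long pulse
I_min_after_nA = 0.0     # minimum current just after the pulse ends

# The difference between the two measurements isolates the current
# carried by the channel of interest, as described above.
delta_I_nA = I_end_of_pulse_nA - I_min_after_nA    # 0.9 nA

# Dividing by the driving force during the pulse converts the current
# difference into a conductance (nA / mV = uS).
g_max_uS = delta_I_nA / (V_step_mV - E_rev_mV)

print(f"g_max = {g_max_uS:.3f} uS")   # g_max = 0.010 uS
```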
      • Question 10: Your table should now summarize the differences between the adult and the neonatal neurons.
        • Which conductances are unchanged during development?
        • Which conductances differ between adult and neonatal neurons?
        • Do these data support the hypotheses you formulated in Questions 5, 6, and 7?
        • As a final test, open the Current clamp simulation with additional conductances used in the previous class. This model is identical to the current clamp model of the neonatal neuron you were working with earlier. For each conductance that varies between neonatal and adult neurons, change the maximum conductances using the ratios you computed in Question 9 as scaling factors. Do this for all the differing conductances simultaneously so that you can reconstruct the adult neuron. (You may notice that some of the maximum conductances shown in this simulation are larger than the maximum conductances you measured for the neonatal neurons. This may be because your voltage clamp protocol did not fully activate the conductance. This is why you should use the ratio to scale the conductances in this simulation, rather than use the maximum conductances you found for the adult simulation.)
          • Repeat the measurements done in Questions 5, 6 and 7 above using your reconstructed adult neuron. (You will need to increase the Stimulus current first pulse to 2 nA so that this simulation matches the simulation you used in those questions.) Do you obtain the same results? Include pictures in your notebook.
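The ratio-based scaling in Question 10 can be sketched as follows. The conductance names and every numerical value are placeholders, not measurements from the simulation; substitute the values from your own Question 9 table.

```python
# Sketch of the ratio-based reconstruction of the adult neuron.
# All numbers are placeholder assumptions.
g_measured_uS = {
    # conductance: (neonatal, adult) maximum conductance you measured
    "A": (0.05, 0.15),   # e.g., a conductance that strengthened threefold
    "H": (0.02, 0.02),   # e.g., a conductance unchanged during development
}

# Maximum conductances as displayed in the neonatal current clamp
# simulation (placeholder values):
g_simulation_uS = {"A": 0.08, "H": 0.03}

g_reconstructed_adult_uS = {}
for name, (neonatal, adult) in g_measured_uS.items():
    ratio = adult / neonatal   # developmental scaling factor from Question 9
    # Scale the simulation's value by the ratio, rather than pasting in the
    # adult measurement directly, as the parenthetical note above explains.
    g_reconstructed_adult_uS[name] = g_simulation_uS[name] * ratio

print({k: round(v, 4) for k, v in g_reconstructed_adult_uS.items()})
```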

                    Surgery of the face

                    24.6 Warning criteria and correlation with outcome

                    If the CMAP amplitude diminishes by more than 50% from the supramaximal baseline CMAP amplitude during surgical maneuvers such as tissue stretching or compression, the surgeon is alerted. Almost all CMAP decrements related to nerve stretch or compression recover within minutes once the tissue is released ( Fig. 24.12 ). However, if the CMAP amplitude diminishes by more than 70% from baseline during surgical maneuvers where decrements are considered irreversible, for example, bipolar or monopolar coagulation, dissection, or cutting, the surgeon is immediately alerted to cease the surgical action. If branch injury is suspected, intraoperative mapping with a sterile handheld probe can reveal whether such injury occurred. The branch is mapped inch by inch (the inching technique), including the segments proximal and distal to the site where the nerve damage is suspected to have occurred. The injury is confirmed when there is a decrement in CMAP amplitude between the proximal and the distal segments. The percentage of decrement provides an estimate of the number of fibers damaged by the surgical maneuver for a given branch.

                    Figure 24.12 . (A and B) Traces from two different patients. Each column presents consecutive CMAP recordings of a muscle labeled on the top. Panel (A) shows complete conduction block of the buccal branch due to the prolonged suspension during excision of the lesion. Upon release of the branch, CMAP amplitude partially recovered. Panel (B) shows a partial conduction block of the temporalis branch. CMAP, Compound muscle action potential.

                    In addition, such a decrement correlates with the decrement between the supramaximal baseline CMAP amplitude and the CMAP amplitude after the injury has occurred for that specific branch. In other words, to assess the severity of the injury, either of the two comparisons between CMAP amplitudes can be used: (1) between the proximal and distal CMAP amplitudes obtained by intraoperative mapping, and (2) between the supramaximal baseline CMAP amplitude and the after-injury CMAP amplitude for that branch obtained by FN trunk stimulation. If the decrement in either comparison is ≥70%, complete palsy of the muscles innervated by that branch and poor long-term recovery are expected. If the decrement is between 50% and 70%, different degrees of partial palsy are likely but predict a relatively good long-term recovery.

                    These warning criteria and this neurophysiologic approach to a suspected branch injury during surgery are concordant with other studies [2,9,17]. The importance of preserving at least 30% of branch fibers to avoid complete postoperative palsy emerges consistently from their data. Whether expressed as a 1:3 ratio [17] or as a percentage [9], the "70% decrement rule" determines the immediate postoperative outcome and long-term recovery of FN function.
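The decrement arithmetic and the warning thresholds described above can be sketched as follows. The thresholds (50% and 70%) come from the text; the function names and the amplitude values are illustrative assumptions.

```python
# Sketch of the CMAP warning criteria described above (assumed names/values).
def cmap_decrement_pct(baseline_amplitude: float, current_amplitude: float) -> float:
    """Percent drop relative to the supramaximal baseline CMAP amplitude."""
    return 100.0 * (baseline_amplitude - current_amplitude) / baseline_amplitude

def expected_outcome(decrement_pct: float) -> str:
    # >= 70%: complete palsy of the innervated muscles, poor long-term recovery.
    # 50-70%: partial palsy, but relatively good long-term recovery.
    if decrement_pct >= 70.0:
        return "complete palsy expected; poor long-term recovery"
    if decrement_pct >= 50.0:
        return "partial palsy likely; relatively good long-term recovery"
    return "below warning thresholds"

# Example: amplitude falls from 1000 uV at baseline to 250 uV after a maneuver.
d = cmap_decrement_pct(1000.0, 250.0)
print(d)                    # 75.0
print(expected_outcome(d))  # complete palsy expected; poor long-term recovery
```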


                    The article is extremely confusing; there is a lot of material and many definitions which are, however, poorly organized. There are a lot of repetitions, and the flow of argument is completely absent. I'll see what I can do.
                    It is incredible that it is a FEATURED ARTICLE with so many problems. Rvfrolov (talk) 18:32, 2 January 2009 (UTC)

                    There are three articles now which discuss very much the same things with numerous repetitions and description of the same material in different terms.

                    As was already suggested by Methoxyroxy 12:37, 2 November 2006 (UTC), it needs a really big clean-up and optimization. There is a lot of confusion there, so I will do this, albeit not at once. I will move different parts between these three articles, edit and unify their style, etc. At a later stage I will need someone who is a native English speaker to do a spellcheck. Rvfrolov (talk) 20:52, 2 January 2009 (UTC)

                    I have replied at Talk:Membrane potential. Looie496 (talk) 22:01, 2 January 2009 (UTC)

                    Hi Rvfrolov. The problem is that explaining an action potential without first describing what the 'mV' part of membrane potential is can lead to a whole lot of problems down the road. I'm all for reorganizing it, just very carefully. Paskari (talk) 15:41, 24 June 2009 (UTC)

                    The current version of the article mixes action potentials with propagation of potentials creating excessive complexity and inaccuracies.

                    Potential propagation is not a required property of an action potential. Most textbooks first define the action potential in an isopotential cell. In Hodgkin and Huxley's experiments, for example, a wire was strung along a squid giant axon to shunt axial currents, effectively producing an isopotential membrane compartment. With this preparation, current-clamp experiments still produce action potentials along the entire fiber, without a "wave of electrochemical activity." Neither does a cell need to carry APs over a distance to make use of action potentials (e.g., electrocyte action potentials in electric fish; many other cell types produce action potentials for reasons other than long-distance signaling).

                    The current definition is also missing a key defining component: the key role of voltage-sensitive conductances.

                    Propagating action potentials may be more consistently described as a continuous succession of local action potentials triggering action potentials in the adjacent sections. This distinction would help avoid some of the current inaccuracies in the article.

                    For example, the current article describes saltatory conduction as follows "Since the axon is insulated, the action potential can travel through it without significant signal decay." In reality, myelinated stretches of axons do not produce action potentials and the action potential does not "travel through it." It would be more accurate to state that depolarization from an action potential at one node propagates passively to the next node and triggers an action potential at the next node. The signal may decay significantly between nodes and still trigger an action potential at the next node. The same section seems to imply that the action potential must be generated at the synapse for the release of neurotransmitter, which is also inaccurate. Any depolarization of sufficient magnitude (passive or active) will have a similar effect.

                    In summary, to make the article more useful, I recommend providing a complete and general definition of the action potential with minimal extraneous detail. The adjacent topics such as "Electrotonic propagation of potentials", "Neurotransmitter release", "Cable theory", "Saltatory conduction", probably belong in separate articles or sections.

                    Thank you for your attention to this, and for your very thoughtful analysis. In large part, I agree with you. Your analysis of propagation seems to me to be correct. I have felt for some time that the lead section of the page is too long and rambling, and should be shortened, and I think it would be an improvement to simplify it per WP:LEAD and per your comments. I also think errors in the explanation of action potentials, of the sort you identified, should be corrected as they come up throughout the text. However, I'm not so sure about splitting off separate articles. The topics you list are not all sections in the current page, and the sections related to these topics do begin with main article links already, and it is appropriate to discuss each of those topics in this page (with the possible exception that discussion of the passive propagation of subthreshold graded potentials should be limited to their relevance to action potentials). Therefore, I would tend to favor simplifying the lead, and correcting errors elsewhere, but not necessarily splitting off any new pages or merging material here to other existing pages. --Tryptofish (talk) 15:44, 27 July 2009 (UTC)

                    I'm pretty much in line with that response. On one side, this article has become much too large and disorganized, probably because of the lack of anybody actively maintaining it. So simplifying the article would be a good thing. On the other side, propagation is probably the reason why action potentials exist, so omitting any discussion of it at all would be bad. Neurotransmitter release, however, might not belong here beyond a brief sentence or two to explain what happens when an action potential arrives. Looie496 (talk) 21:41, 27 July 2009 (UTC)

                    Just to clarify: what both I and Yatsenko agree is that propagation of the action potential, in the sense of an intact action potential just traveling along, is presented misleadingly. On the other hand, propagation in the sense of passive propagation of a depolarization which then brings membrane potential to threshold, activating voltage-sensitive ion channels and regenerating an action potential, is correct, and nobody wants to omit that. (Did that clarify, or make it worse? (smile)) --Tryptofish (talk) 21:51, 27 July 2009 (UTC)

                    Propagation is not a necessary feature of an action potential and should be relegated to a later section. Otherwise the definition is simply incorrect because it would not apply to the phenomenon that was described by Hodgkin and Huxley. Action potentials can be and are produced without propagation in many experimental preparations. Voltage-gated channels (active conductances) are a required part of an action potential and should be included in the primary definition. I would first define the action potential in the case of an isopotential compartment. Dimitri Yatsenko 00:58, 28 July 2009 (UTC)

                    Although the lead is now more accurate, it is way too long and too technical for a general readership. We need to look at ways to make the wording less technical, and to move parts of the lead into other parts of the page. --Tryptofish (talk) 19:02, 28 July 2009 (UTC)

                    I am removing "nerve spikes" from the first sentence.

                    I do not believe that the term "nerve spike" could be generally applied to all action potentials. Action potentials are transient membrane voltage events in individual cells or their compartments. They happen in many cell types. Nerves are bundles of axons in the peripheral nervous system (no nerves in the brain or spinal cord). Thus "nerve spikes" are but one specific manifestation of action potentials. The same could be said about MUAPs (motor unit action potentials), for example. This does not make them synonymous with action potentials. Dimitri Yatsenko 01:25, 28 July 2009 (UTC)

                    I don't have much of an objection to removing the mention of spikes, which is a bit colloquial. However, I reverted your deletion of "nerve impulse." I did that because, first, it is a widely-used synonym (even though not all action potentials are in neurons), and secondly, because nerve impulse is an existing redirect to this page, and therefore the phrase needs to be bolded in the lead sentence. --Tryptofish (talk) 19:00, 28 July 2009 (UTC)

                    I have not seen the term "nerve spike" or "nerve impulse" used as a general synonym for action potentials in any modern scientific literature. An axon is not a nerve. Neurons are sometimes called nerve cells in less technical usage, but this is inaccurate and would not pass in a scientific paper. So I am conflicted between being precise or catering to general nontechnical usage of terms. I think we should strive to be technically accurate and, in so doing, influence the popular understanding of these natural phenomena. Dimitri Yatsenko 19:23, 28 July 2009 (UTC)

                    I wasn't disagreeing with you about spikes. Impulse is used widely in English. Besides: WP:R#PLA and number 7 of WP:NOTGUIDE. --Tryptofish (talk) 19:32, 28 July 2009 (UTC)

                    What if we separate the article for "nerve spike" and "nerve impulse" and explain that it is but one special case of action potential that is recorded from nerves in the PNS? Dimitri Yatsenko 19:31, 28 July 2009 (UTC)

                    No, not needed, and WP:CFORK. --Tryptofish (talk) 19:34, 28 July 2009 (UTC)

                    I defined "spikes" and "impulses" in separate sentences in the first section. What do you think? Dimitri Yatsenko 20:36, 28 July 2009 (UTC)

                    The current wording in the second paragraph states that depolarization "increases both the inward sodium current (depolarization) and the balancing outward potassium current (repolarization/hyperpolarization)". I question the accuracy of that statement.

                    As the membrane depolarizes, the membrane potential moves toward the reversal potential for sodium. This reduces the electrochemical driving force for sodium. Unless the sodium conductance increases by a greater factor to compensate, the sodium current will decrease, not increase. So the statement is not generally accurate. I propose rewording it to state that both conductances increase and only when the net current is negative and leads to further depolarization, a positive feedback loop is generated to precipitate the action potential. Dimitri Yatsenko 21:07, 28 July 2009 (UTC) —Preceding unsigned comment added by Yatsenko DV (talk • contribs)

                    Sorry to disagree, but the present text you quoted is accurate, and the changes you propose are way too technical for this project. --Tryptofish (talk) 21:35, 28 July 2009 (UTC)

                    Fair enough. I agree that this simplification is accurate for relevant time scales, membrane potential ranges, and channel types. Dimitri Yatsenko 22:05, 28 July 2009 (UTC) —Preceding unsigned comment added by Yatsenko DV (talk • contribs)

                    Thank you for understanding. A lot of this is just a matter that we (Wikipedia) are writing for a general audience, and that puts limits on how technical or scholarly we can get. --Tryptofish (talk) 22:56, 28 July 2009 (UTC)

                    In the "Quantitative models" section there are many references to things being simple or a simplification. While this section does have many references attached to it, there is no mention in the article of what these things are simpler than. That is, why are these things simple, and compared to what, and what would be more complex.

                    Reply to unsigned comment: What it means is that mathematical equations do not capture all the complexity of a living cell. I've tried to make it a little clearer, but I'm not sure whether there is any way of saying it better. --Tryptofish (talk) 23:25, 3 September 2009 (UTC)

                    The refractory period section seems like it was copy-pasted from a textbook which was written by a high school teacher held at gunpoint. Perhaps we should consider updating it. Paskari (talk) 23:30, 6 October 2009 (UTC)

                    I've taken a shot at rewriting it. Cellular stuff is not really my strength, so if I got anything wrong, I hope somebody will correct it. Looie496 (talk) 00:10, 7 October 2009 (UTC)

                    I'm just about to sign off, but I'll take a look at it tomorrow. --Tryptofish (talk) 00:14, 7 October 2009 (UTC)

                    That is much better, great job. Paskari (talk) 11:02, 7 October 2009 (UTC)

                    Yes, much better, thanks. I tweaked it a little further, not much. --Tryptofish (talk) 18:34, 7 October 2009 (UTC)

                    I've just attempted a pretty major rewrite of the lead, which I hope won't offend anybody. I thought the existing version was too hard for readers to understand -- it also contained a couple of minor errors. I also added a paragraph about the distinction between sodium and calcium spikes, which seems to me to be a very important point. Regards, Looie496 (talk) 20:08, 23 February 2010 (UTC)

                    I note that the lead from the last FAR was moved some time ago to the overview section. If this persists, we should justify the need for both a lead and overview section, and ensure that they are not redundant to each other. Geometry guy 20:39, 23 February 2010 (UTC)

                    The Overview section needs revision too, but it seemed to me that these changes to the lead were sufficiently "bold" that it would be better not to pile other changes on top of them before discussion. Looie496 (talk) 20:48, 23 February 2010 (UTC)

                    I have removed a reference from the lede that was to:

                    • Miller FP, Vandome AF, McBrewster J (2009). Cardiac action potential. Beau Bassin, Mauritius: Alphascript Publishing. ISBN 6130098685.

                    Alphascript Publishing republished Wikipedia content. And the book in question republishes this article. The cover of the book can be seen [ on Amazon]. This article is named on the front cover. (The format for Alphascript books is to list the WP articles contained therein on the front cover as part of the name.)

                    The person who owns the book can verify that this is republished Wikipedia content by looking at the copyright information inside the book itself. -- RA (talk) 13:16, 28 February 2010 (UTC)

                    The other danger is that Alphascript also publishes academics' theses if they convince them to sign their terms. Was the source a republished wiki article or a thesis? So always double check before removing any references whether they are a Wikipedia article or a thesis. Generally a quick way to check is searching the product description in Wikipedia. Kasaalan (talk) 13:29, 28 February 2010 (UTC)

                    It is VDM that also publishes academics' theses if they convince them to sign their terms. All Alphascript titles are Wikipedia articles. My mistake. Kasaalan (talk) 19:14, 5 March 2010 (UTC)

                    Note: I have edited the previous comment, because it was added by altering the comment above it in a way that made this section impossible to understand without going back through the history. I hope that my revision has not changed the message. Looie496 (talk) 19:26, 5 March 2010 (UTC)

                    The region with high concentration will diffuse out toward the region with low concentration. To extend the example, let solution A have 30 sodium ions and 30 chloride ions. Also, let solution B have only 20 sodium ions and 20 chloride ions. Assuming the barrier allows both types of ions to travel through it, then a steady state will be reached whereby both solutions have 25 sodium ions and 25 chloride ions. If, however, the porous barrier is selective to which ions are let through, then diffusion alone will not determine the resulting solution. Returning to the previous example, let's now construct a barrier that is permeable only to sodium ions. Since solution B has a lower concentration of both sodium and chloride, the barrier will attract both ions from solution A.

                    There is not a single citation about osmosis. Osmosis tells us exactly the contrary. Facts tell us the same thing as osmosis: the cited diffusion doesn't occur. The concentrations may be equilibrated by water movement, and the membrane is permeable to water through aquaporins or directly. Somasimple (talk) 05:28, 3 June 2010 (UTC)

                    If solution A is electroneutral THEN 30n+30p=0 (where n stands for negative and p for positive). If solution B is also electroneutral THEN 25n+25p=0. Considering an action from a compartment onto another one orders to consider all positive and negative charges that exist in the compartments.

                    So, there is NO electric flux OR electric field BECAUSE EACH compartment is neutral at start. Saying a compartment is neutral is saying that it can't exert any electric "thing" at all.

                    Conclusion: You can't get something that is the result of k(25p/30p) or k(30p/25p). That is mathematically and physically incorrect because you arbitrarily remove the negative charges without any scientific explanation. Somasimple (talk) 09:27, 3 June 2010 (UTC)

                    Can you fix it, or should that part be removed? Looie496 (talk) 00:52, 4 June 2010 (UTC)

                    Are you asking me to change the way biology is taught? This page remains for historical reasons (Nobel prizes), but its contents are far from the actual and accepted knowledge in biochemistry, for example. If the goal of Wikipedia is to promote science then you must rewrite the page, but it will go against the biology community. Somasimple (talk) 06:10, 4 June 2010 (UTC)

                    Wikipedia articles are written by people like you and me. If you see errors in an article, and can back up the claim that they are errors by referring to reputable scientific publications, then you should feel free to rewrite the section in a way that makes it more correct. In this case, if you don't fix it, it's likely that nobody else who reads this will be able to. I certainly can't. Regards, Looie496 (talk) 17:04, 4 June 2010 (UTC)

                    This is my area of expertise more than it is Looie's, so I think I can help here. I think the page is correct about this, as it is written. There are several errors in what Somasimple has said here. First, this is not an osmotic phenomenon, in that we are not dealing with H2O molecules moving along with the ions. Second, there are two factors driving ionic movement: electroneutrality or like-charge repulsion, as mentioned, but also entropy. Entropy will cause, in the quoted example, the ions to move from A to B. When they do, electroneutrality will be achieved when there are 25 plus 25 in A, and 25 plus 25 also in B. --Tryptofish (talk) 18:23, 4 June 2010 (UTC)

                    Several errors? Did I say it was an osmotic phenomenon? No! I just said there was NOT a single citation about it. Osmosis exists whenever there is a concentration change, just whenever! THEN OSMOSIS EXISTS WHENEVER SOME IONS MOVE. There must be some osmosis because it's a reverse diffusion. If you put a cell (a neuron is a cell) in a hypotonic solution, osmosis happens and the cell expands because the internal concentration decreases by water flux. It is a fact. This fact creates an error in "your" silent diffusion that occurs in the other direction. I like, again, your "entropic electroneutrality". It is the first time I have heard/read that a charge vanishes by entropy. A citation, a reference? In our example, it was clear (at least for me) that the membrane was semipermeable, thus the result is not the one you gave. "Returning to the previous example, let's now construct a barrier that is permeable only to sodium ions. Since solution B has a lower concentration of both sodium and chloride, the barrier will attract both ions from solution A." The difference remains because negative ions remain on one side. It creates the membrane potential, but you're right, it raises another big problem. You now have a side that is negative and another that is positive, and diffusion will have some problem being achieved -( Somasimple (talk) 05:19, 5 June 2010 (UTC)

                    I never said that entropy makes a charge disappear. I said that it can make it move. The reason that excitable cells do not shrink or swell due to hyper- or hypo-tonicity is that the ions that move across the membrane represent a very small fraction of all of the ions that are present (in real cells, though not in the example). You also might want to familiarize yourself with the Nernst equation. --Tryptofish (talk) 14:24, 5 June 2010 (UTC)

                    You do not reply at all. Where do the negative charges move in our example (the membrane is only permeable to sodium)? Somasimple (talk) 10:14, 6 June 2010 (UTC)

                    If the membrane is permeable only to sodium cations, then anions do not cross the membrane at all. Consequently, there is a separation of charge, giving rise to a transmembrane voltage difference. --Tryptofish (talk) 15:17, 6 June 2010 (UTC)

                    If only it were so simple. As you know, at the molecular level (the level we are speaking of), distances are of importance. The electrochemical force you created arises between two compartments separated by a membrane whose thickness is known to be 5 to 7 nm. It means that anions and cations must be separated, in all compartments, by a distance that is always greater than the membrane thickness. If the distance is lower in any compartment then you have an "entropic" problem (in fact I call it a simple Coulomb force): anions or cations can't be attracted by the other side, since the strength of the force coming from the other side is not sufficient. This limits the process to concentrations < 5 mmol. Far from the concentrations that exist in cells. I think you might NOT consider the Nernst equation because it will give you some headaches with charge density and conservation of energy. Somasimple (talk) 05:32, 7 June 2010 (UTC)

                    Somasimple, consider a parallel-plate capacitor, which is how the cell membrane is represented in the Hodgkin-Huxley model. Note that the capacitance C is proportional to the area of the charged plates divided by their separation[1]. For a membrane thickness of about 5 nm, you still have a significantly large area in which the ions are able to arrange themselves (even an impossibly minuscule cell with a central body length only ten times the width of the membrane will have a "plate" area 100 times larger). The point is that capacitance will be very large, such that even if 5 nm were a very large distance for Coulomb attraction (which it very much is not), it won't matter because there are so many ions able to line up along the membrane, just like in an ordinary circuit-board capacitor. And from there, depolarization occurs when you suddenly open the ion gates and sodium floods in, etc. SamuelRiv (talk) 17:21, 14 June 2010 (UTC)

                    I am effectively considering a capacitor, and you do not consider the distances that effectively exist between the ions in presence (here is a link to physics principles). Even if the surface is enlarged, then you decrease the charge density, and it matters for capacitance: the less charge density you have, the less tension you'll get. In our case ions can't be attracted from the other side: TOO FAR! --Somasimple (talk) 06:04, 15 June 2010 (UTC)

                    The inward movement of sodium ions and the outward movement of potassium ions are passive

                    Let's describe all the events that happen simultaneously:

                    1/ Sodium movement balanced by chloride

                    Sodium moves inward and Na ions stick to the inner face of the membrane; chloride ions stay outside and balance the Na charge across the outer face of the membrane.

                    2/ Potassium movement balanced by chloride

                    Potassium moves outward and K ions stick to the outer face of the membrane; chloride ions stay inside and balance the K charge across the inner face of the membrane.

                    Now let's see what happens on each side:

                    Sodium moves inward and Na ions stick to the inner face; chloride ions stay inside and balance the K charge across the inner face.

                    Chloride ions stay outside and balance the Na charge across the outer face; potassium moves outward and K ions stick to the outer face.

                    Result: a membrane voltage that is... quite null.

                    Osmosis: since there are concentration changes, there is water flux through aquaporins:

                    1/ from int to ext for sodium

                    2/ from ext to int for potassium

                    Result: how is it possible to have a bidirectional and simultaneous water movement through aquaporins? Somasimple (talk) 05:57, 5 June 2010 (UTC)

                    Sorry, but you misunderstand. Ion channels are not aquaporins, and they are not permeable to water molecules. In vertebrate animals, aquaporins are mainly expressed in the kidneys, and there is relatively little water transport during an action potential. Ion channels are selectively permeable to ions, so chloride does not move together with cations; there is also a differential distribution across the membrane of impermeable anions. The reason there is a membrane potential at all is that there is a separation of charge. If you continue to disagree about all of this, please cite sources. --Tryptofish (talk) 14:21, 5 June 2010 (UTC) Ion channels are not permeable to water molecules? Really? Molecular dynamics of the KcsA K(+) channel in a bilayer membrane Somasimple (talk) 10:22, 7 June 2010 (UTC)
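
The "separation of charge" point above connects directly to the equilibrium (Nernst) potentials of the permeant ions. A minimal sketch, using typical mammalian concentrations as assumed inputs (none of these numbers come from the thread):

```python
import math

# Nernst equation: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in), in volts.
R, T, F = 8.314, 310.0, 96485.0  # J/(mol*K), body temperature in K, C/mol

def nernst(z, c_out, c_in):
    """Equilibrium potential in mV for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

e_k = nernst(+1, 4.0, 140.0)    # K+: about -95 mV
e_na = nernst(+1, 145.0, 12.0)  # Na+: about +67 mV
print(f"E_K = {e_k:.0f} mV, E_Na = {e_na:.0f} mV")
```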

                    About the section "Myelin and saltatory conduction". It is said:

                    1. "The evolutionary need for the fast and efficient transduction of electrical signals in nervous system resulted in appearance of myelin sheaths around neuronal axons."
                    2. "Myelin prevents ions from entering or leaving the axon along myelinated segments."

                    The first assertion is false, since every axon is covered by myelin, compact or not, leaving no room (<20 nm) around the axon. See the excellent book, page 128 [ Neurocytology: Fine Structure of Neurons, Nerve Processes and Neuroglial Cells]

                    The second becomes, in that case, untrue, since it assumes that unmyelinated axons are bare. --Somasimple (talk) 05:43, 9 June 2010 (UTC)

                    Well, this topic I do know about. Myelin appears only in vertebrates (although some other groups have similar substances), and even in vertebrates only a subset of axons are myelin-coated. I don't have that specific book on hand, but every basic neuroscience book covers this point very thoroughly. Looie496 (talk) 06:44, 9 June 2010 (UTC) Here is a link to the book: Ennio Pannese, Google book. There are citations on page 119 and the following ones about evolutionary aspects. On page 128: if vertebrate axons are always insulated, how do they function, since ion exchanges can't happen? --Somasimple (talk) 07:28, 9 June 2010 (UTC) From The Biology of Schwann Cells: Development, Differentiation and Immunomodulation, edited by Patricia Armati: "All neurons in the PNS are in intimate physical contact with Schwann and satellite cells, regardless of whether they are myelinated or unmyelinated, sensory or autonomic. All axons of the peripheral nerves are ensheathed by rows of Schwann cells, in the form of either one Schwann cell to each axonal length, or in Remak bundles, formed when an individual Schwann cell envelopes lengths of multiple unmyelinated axons (Figures 1.2, 1.3 and 1.4b). There is now a large body of evidence that defines a multitude of Schwann cell functions that are not related to myelination (Lemke 2001). This uncoupling of myelin-associated functions from other Schwann cell roles emphasises the essentially symbiotic relationship between nerve cells and Schwann cells, where each is dependent on the other for normal development, function and maintenance." Here is the link to the excerpt --Somasimple (talk) 10:07, 9 June 2010 (UTC) You raised a lot of points that need to be addressed, so I'll try to hit them all, in no particular order.
I must have missed your quotes above (#1 & #2) in the original Wikipedia article, as number 2 is incorrect (it's mostly due to capacitance changes; there would have been other ways of just removing channels from the membrane to minimize ionic current, but then there's still all that capacitive loss with no regenerative ionic current). Regarding number 1, this is true, though as mentioned in the book to which you linked, other organisms have attained similar sorts of results in other ways (e.g. the squid giant axon). I should note that being in intimate contact and being ensheathed are VERY different (and really, when we say myelinated, what's meant is compact ensheathment, which is somewhat different still). Regarding ion movement, see the discussion of the node of Ranvier. I don't work on invertebrates, so I don't know how they deal with such things; presumably there's either enough space (this is the case for skeletal muscle, where the fibers are packed like sardines but there's still enough space for things to work; there are some computational modeling studies of these sorts of things on PubMed) or there are random holes/gaps similar to nodes of Ranvier. The important points from that are that the picture of unensheathed axons in a swimming pool of ions of constant concentration isn't really correct (but is normally a close enough approximation) and that Schwann cells aren't a one-hit wonder. --Dpryan (talk) 20:56, 10 June 2010 (UTC) From the excerpt: "In a mixed peripheral nerve unmyelinated fibres outnumber myelinated fibres by a ratio of three or four to one (Jacobs and Love 1985). For example, a transverse section of a human sural nerve contains approximately 8000 myelinated fibres per mm2, whereas the unmyelinated axons number 30 000 per mm2". I think that the approximation you made about the pool is quite far from the reality of anatomy? The constant concentration is perhaps not achieved at all!
--Somasimple (talk) 06:28, 11 June 2010 (UTC) For comparison, a t-tubule is often 20-40 nm in diameter, and even then the ion concentrations don't change that much (you'll get a plateau after a few stimulations in a prolonged train, and the change will only be a few mM). You really don't need a lot of ions to move for an AP to occur; otherwise we would have very different anatomy! So, as I said, the approximation usually works for non-extreme cases (a square mm is a fair bit of space). --Dpryan (talk) 17:43, 11 June 2010 (UTC) I TOTALLY agree with you. Saying that concentrations remain unchanged may be reformulated this way: "Only a tiny fraction acts and creates all the effects." It may be refined another way: "The unchanged large portion isn't involved in the process in any manner." --Somasimple (talk) 05:18, 12 June 2010 (UTC)
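
The "only a tiny fraction acts" point can be made quantitative. This is a rough estimate with assumed textbook-style numbers (specific capacitance, AP voltage swing, internal Na+ concentration, and axon radius are all assumptions, not figures from the thread); the charge needed to swing the membrane voltage corresponds to a very small fraction of the ions available inside even a thin axon:

```python
# Charge moved per AP is Q = C * dV; dividing by Faraday's constant gives
# moles of ions crossing per unit membrane area.
C_M = 1e-2         # F/m^2, ~1 uF/cm^2 specific membrane capacitance (assumed)
DV = 0.1           # V, approximate voltage swing of an AP (assumed)
FARADAY = 96485.0  # C/mol
RADIUS = 0.5e-6    # m, a thin 1-um-diameter axon: high surface-to-volume ratio
NA_IN = 12.0       # mol/m^3, ~12 mM internal Na+ (assumed)

moles_per_area = C_M * DV / FARADAY        # mol crossing per m^2 of membrane
# a cylinder has membrane area per unit volume of 2 / radius
moles_per_volume = moles_per_area * 2.0 / RADIUS
fraction = moles_per_volume / NA_IN
print(f"fraction of internal Na+ moved per AP: {fraction:.1e}")
```

Even in this worst-case thin axon the fraction is well under one percent, consistent with the comment that bulk concentrations barely change.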

                    In this article [Cable Theory] the conduction velocity depends greatly on the time constant, which results from τm=Cm*Rm.
                    It is said that myelin decreases the membrane capacitance. That seems OK, but what happens to the membrane resistance in the case of myelination?
                    Computation of the time constant with reasonable values leads to an increase of the time constant:
                    You may see a discussion about this problem. --Somasimple (talk) 10:55, 11 June 2010 (UTC)
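
The computation in question can be sketched numerically. A minimal Python sketch with assumed, idealized cable parameters (none of these numbers come from the thread): treating n membrane wraps as multiplying membrane resistance by n and dividing membrane capacitance by n, the time constant τm = Rm*Cm is unchanged while the length constant λ = sqrt(Rm/Ri) grows as sqrt(n), which is the usual resolution of this apparent paradox.

```python
import math

# Idealized per-unit-length cable parameters for a ~1 um axon (all assumed):
r_m = 1e5     # ohm*m, membrane resistance x unit length
c_m = 3e-8    # F/m, membrane capacitance per unit length
r_i = 1.3e12  # ohm/m, axial (internal) resistance per unit length

for n in (1, 100):  # bare membrane vs ~100 membrane wraps of myelin
    tau = (r_m * n) * (c_m / n)        # seconds; myelin leaves this unchanged
    lam = math.sqrt((r_m * n) / r_i)   # metres; grows as sqrt(n)
    print(f"n={n:3d}  tau={tau * 1e3:.1f} ms  lambda={lam * 1e3:.2f} mm")
```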

                    The discussion you linked to sets it out clearly: membrane capacitance decreases while membrane resistance increases, so there is no charge dispersed through the membrane. Think of a capacitor: the plates have their charged particles line up on each end, which drops the voltage through the circuit, so to maximize voltage propagation we need to maximize the time constant *across the membrane* and minimize the time constant *through the axon*. That's probably where your confusion lies: you need variables for each direction. Each node can change the direction of propagation by depolarizing sequentially, with the potential difference propagating along a straight line each time. SamuelRiv (talk) 23:06, 13 June 2010 (UTC) My confusion comes from the [book of Koch], pages 10 and 167. The text is clearly speaking of τm=Cm*Rm, nothing else. BTW, your comment contradicts the comments below (next section), where the internal current becomes negligible. --Somasimple (talk) 05:15, 14 June 2010 (UTC) Okay, I might have gotten confused, but here's what I was saying before (hopefully clearer): there are two R-C-I's here, one for the current between the inside of the axon and the salt medium outside the cell (across the membrane), and the other through the axon from one node to the next. In the first case (across the cell membrane), R is big, C is small, and I is minimal. In the second case (through the axon between nodes), R is small, C is big-to-infinite (as in a wire), and I is V/R from the depolarization. Note that charge carriers do not actually flow with appreciable speed through the axon, the same as in electric conduction through a wire, where electrons flow at about 0.1 mm/s; rather, the current propagates as a potential difference from one node to the next (as a big capacitor, with the dielectric having conductive properties in the cations that I don't know of and should look up).
SamuelRiv (talk) 08:20, 14 June 2010 (UTC) That's NOT OK at all: please give a reference for this second internal capacitance. How is it connected with the internal R? In parallel or in series? Here is mine: Koch circuit. Secondly, if you exchange electrons then you have an electric current that travels at light-speed even if the electrons themselves don't (Electric circuit). The internal resistance is, BTW, higher than you tell us. --Somasimple (talk) 10:11, 14 June 2010 (UTC) In the Koch pdf (I have the same book btw, though in a box somewhere), there are two separable circuits. One is bracketed and labeled "node", while the other lies over the internode and none of its circuit elements are labeled. The first one represents the membrane potential between the inside of the axon and the outside cations, while the other models the internal resistance through the myelinated portion of the axon between nodes. Those are what I was referring to before as "two separate R-C-I's", the first being across the membrane and the second being through the axon. Yes, the current does effectively travel at light-speed, as it simply is electric polarization between two points, the same as in a metal wire (which I used as an example to illustrate that electrons move extremely slowly compared to light-speed current). Another illustration similar to Koch is in this review (section: Active properties of nerve fibers), where Ra is the resistance of the "wire" that is the myelinated internode portion of the axon. So I think we're just mixed up here: it's the difference between the resistance through the metal wire and the resistance of the rubber insulation. Membrane resistance increases with myelination, allowing the effective axon "wire" resistance to decrease. SamuelRiv (talk) 16:48, 14 June 2010 (UTC) Are you saying that sodium current intake (physical displacement) is followed by an electronic exchange (no displacement) with the next node?
If so, then you have some problems: 1/ The next node becomes positive with this exchange, and the sodium intake at this site will not happen. 2/ The electric current you create with this electronic exchange is in the wrong direction. 3/ If an electronic exchange exists, when does it start and when/where does it stop? 4/ The worst problem remains the law of least resistance. An electric current flows mostly along the path of least resistance, and because the AP uses only a very tiny fraction of the ions present, there is not enough current flowing to the next node to trigger another AP. --Somasimple (talk) 05:04, 15 June 2010 (UTC)

                    This page is not a forum for general discussion about Action potential. Any such comments may be removed or refactored. Please limit discussion to improvement of this article. You may wish to ask factual questions about Action potential at the Reference desk, discuss relevant Wikipedia policy at the Village pump, or ask for help at the Help desk.

                    Everyone knows this limitation. Does that mean that errors must NOT be discussed, and thus articles NOT improved?
                    I brought, in the previous section, a computation that contradicts the notion of velocity improvement by the capacitance reduction of myelin. You have every right to bring another computation that says something else, or you MUST accept the fact that the article's formulation is wrong, even if it contradicts your current conviction. Here is a quote at the bottom of the edit page: "Encyclopedic content must be verifiable." That seems clear. --Somasimple (talk) 05:06, 12 June 2010 (UTC)

                    The point of the comment above is that the article needs to follow the published literature as directly as possible. If you make objections without pointing to reputable major-league publications that make those objections, it isn't useful. In this case I believe the reason you won't find major publications making this objection is that the assumptions underlying cable theory don't apply to myelinated axons, because the conductance at the nodes is so dominant. Looie496 (talk) 17:45, 12 June 2010 (UTC) I don't understand what you said, Looie. Among many other things (like modeling bifurcation and propagation in unmyelinated dendrites), cable theory is used to represent nontrivial capacitance structure, which changes the threshold current from the usual "spherical cow" capacitance model in, for example, the original Hodgkin-Huxley. No insulation is perfect, especially not myelin, so I don't see how one can argue that myelination would make such corrections in threshold current inapplicable, but maybe it's because I do theory? SamuelRiv (talk) 18:17, 13 June 2010 (UTC) Let me try again. Cable theory says that signal propagation is determined by two key parameters, the time constant and the length constant. But in a myelinated axon, the distance between nodes is a small fraction of the length constant, which means that the assumptions of cable theory don't apply and therefore the cable-theory time constant is irrelevant. The fraction of current that flows through the myelin is too small to matter; it is dominated by the current that flows across the membrane at the nodes. At least that's my understanding. Looie496 (talk) 19:04, 13 June 2010 (UTC) I didn't read the full conversation beforehand. I hope I cleared up the confusion for the OP in my response in the previous section.
                    Now, I'm not sure how general a term cable theory is, but I would suspect it's applicable everywhere, though in the axon, once the proper approximations are made, I'm sure you get to ignore most of it, as you say above. I was thinking about it in terms of other areas, so yeah, you're right. SamuelRiv (talk) 23:09, 13 June 2010 (UTC)
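
The point that the internode distance is a small fraction of the length constant can be illustrated with a one-line calculation. Both numbers below are assumed round values for illustration, not measurements cited in the thread; passive steady-state attenuation over a distance L goes as exp(-L/λ):

```python
import math

lambda_myelinated = 2.8e-3  # m, assumed length constant under myelin
internode = 1.0e-3          # m, assumed node-to-node spacing

surviving_fraction = math.exp(-internode / lambda_myelinated)
print(f"{surviving_fraction:.0%} of the voltage change reaches the next node")
```

Because the internode is shorter than λ, most of the depolarization survives passively to the next node, where the active conductances take over.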
                    Thanks Looie for this explanation.
                    Since an axon is a 3-dimensional thing (a cylinder),
                    Since electrical propagation is omnidirectional,
                    Since the external milieu has a lower resistance than the axolemma,
                    Since the electrical law of least resistance implies a shorter circuit (any node of any axon that is closer than the next node of the active one; remember that axons do not travel alone but packed in nerves),
                    Then there is NOT a chance that the current flows to the following node. It is too far. Here you try to limit the theory to a longitudinal propagation, where electricity has no such limitation. --Somasimple (talk) 05:31, 14 June 2010 (UTC)

                    From the "Peak and Falling Phase" section:

                    However, the same raised voltage that opened the sodium channels initially also slowly shuts them off, by closing their pores the sodium channels become inactivated. This lowers the membrane's permeability to sodium, driving the membrane voltage back down.

                    How? If some sodium is still flowing into the cell, the membrane voltage would continue to go up. Wouldn't it be the rate of increase that goes down? And if the sodium flow is blocked completely, then how does this change the voltage at all? If the only thing driving down the voltage is the potassium outflow, then the last part of the quoted statement is misleading and needs to be fixed.

                    The relationship between ion movement and voltage is not as direct as you apparently think. It is possible to have ion flow without any change in membrane potential, and it is possible to have a change in membrane potential without any ion movement. The rules that govern membrane potential are outlined in the membrane potential article -- this is complicated stuff, though, and it might be better to consult a textbook. Looie496 (talk) 06:37, 12 January 2011 (UTC) Agreed, it's a complicated subject and perhaps there is a critical concept I have yet to understand. But just for the record, if you got the impression that I was equating sodium flow with membrane potential, that's incorrect. I was only talking about sodium flow's individual contribution to membrane potential. Thanks for your input though. I'll check that link out. (talk) 22:10, 12 January 2011 (UTC) 184, I think you raise a valid point. The problem is with the imprecise "up/down" language, and I'll fix it on the page. As Looie said, the information is correct, but I have to admit that it is worded less helpfully than it could have been. Thanks! --Tryptofish (talk) 20:14, 12 January 2011 (UTC)
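                    
The point that reducing sodium permeability by itself drives the voltage back down can be illustrated with the Goldman-Hodgkin-Katz voltage equation. This is a sketch with assumed typical concentrations and arbitrary relative permeabilities (a simplification of the rules covered in the membrane potential article, not a model from the thread):

```python
import math

# GHK voltage equation for K+ and Na+ only (Cl- omitted for simplicity).
R, T, F = 8.314, 310.0, 96485.0  # J/(mol*K), K, C/mol

def ghk(p_k, p_na, k_out=4.0, k_in=140.0, na_out=145.0, na_in=12.0):
    """Membrane potential in mV for the given relative permeabilities."""
    num = p_k * k_out + p_na * na_out
    den = p_k * k_in + p_na * na_in
    return 1000.0 * R * T / F * math.log(num / den)

# High P_Na near the AP peak; low P_Na after channel inactivation.
print(f"peak of AP (P_Na >> P_K): {ghk(1.0, 20.0):+.0f} mV")
print(f"after Na inactivation   : {ghk(1.0, 0.05):+.0f} mV")
```

Shutting off sodium permeability alone pulls the computed potential from positive values back toward E_K, with no need for any "sodium outflow".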

                    Let me leave a note that I'm going to try to do some serious work on this article. The main thing I've done so far is to move a bunch of material to the membrane potential article, so that this article doesn't repeat a lot of stuff that more properly belongs there. It needs instead to have a detailed discussion of voltage-gated ion channels and their effects on membrane potential. Looie496 (talk) 19:17, 14 October 2011 (UTC)

                    Hey everyone! I did a lot of writing on this article in the distant past, half a dozen years ago or so, and kind of moved on to other things. I'm happy to see all that has been done to it since then! The article is much improved in many ways. I was kind of astonished when I scanned the history to see just how many changes have been made and how many people have made contributions. Having said that, having skimmed through the present article, I feel like there is still room for improvement, which seems kind of hard to believe, given how many people have toiled over this for the past few years. I'm a little hesitant to jump back in. One of the reasons I hesitate is that I don't really want to 'undo' any of the great things that have been done, but there is so much history, I can't take it all in. I've not re-read the whole article in detail yet, and of course I would do that before I proposed any changes. But the introduction, in particular, seems kind of muddy to me. I feel like a naive reader could get through the first couple of paragraphs and still not have any idea of what an action potential is. There are also things in the intro paragraphs that are basically inaccurate. The trouble is that the 'inaccuracies' are more in the technical detail than in concept. This may be appropriate for the intro paragraphs. For example, the intro talks about how the membrane potential "rises" during the action potential, when really, during most of the "rising phase" of the AP, the membrane potential is approaching zero. It is more accurate to say that the membrane is 'depolarizing', although even this only describes its relationship to the membrane potential up until the rising phase crosses 0 volts (after which it's polarizing again, but with the opposite polarity). In the sense that during the rising phase of the action potential the membrane potential is moving in a positive direction, it could be said to be 'rising'.
So it's not wrong, it just seems... muddy. In the second paragraph, it says: "(ion channels) rapidly begin to open if the membrane potential increases to a precisely defined threshold value." This is just wrong. The threshold does not determine when ion channels open. It's the other way around. The probability that a channel will be open as the membrane potential changes determines, in part, the threshold for the action potential. The relationship between membrane potential and channel open probability is a smooth curve without a threshold. What really determines the threshold of the action potential is the balance between sodium and potassium current. At the membrane potential where the sodium current exceeds the potassium current, depolarization of the membrane becomes regenerative (i.e., the AP threshold is the membrane potential where INa > IK). The state of the channels determines the threshold, not the other way around. Even to say that the threshold is "precisely defined" is wrong, at least in the sense that the membrane potential value of the threshold is precisely defined. The membrane potential value of the threshold changes all the time, depending on the recent history of the membrane potential (e.g. the refractory period is basically a change in AP threshold). The threshold *IS* precisely defined in terms of it being at the precise membrane potential where INa exceeds IK. I had a fairly detailed explanation of this in a long-ago version of this article, but it's been long since removed. I presume the reason it got taken out was that it was too technical. I appreciate that the article needs to be readable by a large audience and thus probably shouldn't get too technical, but does it have to be dumbed down to the point where it's not correct? Do you think there might be a way to have it be both understandable and correct?

                    Well, there's huge room for improvement, no doubt about it, and I hope you will feel free to work on the article. I'm not keen on using "depolarize" in place of "rise". Success for this article means getting the reader to have a visual image of what happens during an action potential, and the word "rise" is a lot more visually evocative than "depolarize". Regards, Looie496 (talk) 14:48, 23 October 2011 (UTC) A lot of us, myself included, would like to make the lead more accessible to the general reader, but also find the task a bit daunting. As for threshold, it's true that it's the threshold for the action potential itself, rather than of the ion channels, but nonetheless voltage-sensitive ion channels have precise voltages at which they start to open (and below which they do not open), and those "thresholds" generate the threshold of the action potential. --Tryptofish (talk) 19:49, 24 October 2011 (UTC)

                    So I would, maybe not so much dispute that, as modify it a bit. I find it more useful to think of the relationship between voltage and channel opening in terms of probability. The actual functions that describe this relationship are exponentials or sums of exponentials, so they don't really have a distinct 'starting point'. Rather, they asymptote as they approach zero probability. So no, they really don't have a threshold or a precise voltage where they open. They have a precise probability of being open at a given voltage, and that's different because it's a smooth function without a threshold. Even a voltage-gated sodium channel will open every now and again, even at a very hyperpolarized potential. As for the threshold of the action potential, it is determined only indirectly by the voltage dependence of sodium channel opening. The single proximate basis of the AP threshold is the voltage where the sodium current becomes larger than the potassium current. This is, of course, influenced by how many sodium channels are open, but you can't ascribe the threshold solely to Na because it also depends on K. If you made a whole-cell current/voltage plot, you could pick out the threshold precisely as the voltage where the slope of the plot becomes negative. I tried to describe threshold this way (with a diagram) in an earlier version of this article, but it was clearly too technical for people's taste. Synaptidude (talk) 01:24, 25 October 2011 (UTC)
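                    
The "smooth curve without a threshold" point can be made concrete with a Boltzmann activation function. The v_half and slope values below are assumed for illustration (not fitted to any channel): the open probability shrinks toward zero at hyperpolarized voltages but never actually reaches it.

```python
import math

def p_open(v_mv, v_half=-30.0, slope=6.0):
    """Boltzmann open-probability curve; assumed illustrative parameters."""
    return 1.0 / (1.0 + math.exp(-(v_mv - v_half) / slope))

# Small but nonzero at every voltage: no distinct 'starting point'.
for v in (-90, -70, -50, -30, -10):
    print(f"V = {v:+4d} mV  P(open) = {p_open(v):.2e}")
```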

                    ... and just in case this horse is still breathing: even though precise, the probability function that describes the relationship between channel opening and voltage is not fixed. It depends on other things, such as the inactivation state of the channel. In the extreme case (and in a population of channels, since a single channel behaves stochastically) the probability of a channel in a population opening can be zero at all potentials, if they are all inactivated. So the probability that sodium channels will open at a given voltage depends on the history of the voltage, how long it's been since the voltage changed, etc. So basically, if you want to be accurate, you can't even say that the action potential threshold happens at a precise voltage, because that threshold is changing all the time because of the recent history of the membrane potential. Yes, if you hold the membrane at precisely the same potential for long enough for the channels to reach a steady state, then the threshold will be in the same place every time you test it. The only thing you can say with precision is that the action potential will fire at precisely the voltage where INa > IK, whatever the size of those currents is in a particular set of circumstances. Synaptidude (talk) 05:12, 25 October 2011 (UTC)

                    ... and sorry, but in all my verbosity, I forgot the main point I wanted to make. Because the threshold for the action potential is at the point where INa becomes larger than IK, the sodium current can actually grow quite large before the threshold is reached. So even if you wanted to (incorrectly) say that sodium channel opening has a threshold, the threshold for the action potential occurs at some voltage-distance from that 'threshold'. Obviously, the larger IK is, the farther along the voltage scale, and thus the farther from the sodium channel 'threshold', the threshold for the AP will be. Thus, even if there were a true threshold for sodium channel opening, it would not be directly related to the action potential threshold.

                    Now the question is: can we find a way to accurately describe the threshold without confusing everyone? Synaptidude (talk) 05:28, 25 October 2011 (UTC)
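                    
The "threshold is where INa exceeds IK" definition given above can be sketched numerically. All parameters here are assumed for illustration (arbitrary conductance units, a Boltzmann activation curve, a simple potassium leak; this is not a fitted Hodgkin-Huxley model): the threshold falls out as the first voltage at which the net current turns inward, i.e. regenerative.

```python
import math

E_NA, E_K = 60.0, -90.0      # mV, assumed reversal potentials
G_NA_MAX, G_K = 120.0, 8.0   # arbitrary conductance units (assumed)

def p_open(v, v_half=-30.0, slope=6.0):
    """Assumed Boltzmann activation curve for the sodium conductance."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / slope))

def net_outward(v):
    i_na = G_NA_MAX * p_open(v) * (v - E_NA)  # inward (negative) below E_Na
    i_k = G_K * (v - E_K)                     # outward (positive) above E_K
    return i_na + i_k

# Scan upward from rest for the first voltage where net current turns inward,
# i.e. where |INa| first exceeds IK.
threshold = next(v / 10.0 for v in range(-700, 600)
                 if net_outward(v / 10.0) < 0.0)
print(f"AP threshold in this toy model: about {threshold:.1f} mV")
```

Note that raising G_K in this sketch pushes the crossing to more depolarized voltages, which is exactly the "the larger IK, the farther the AP threshold" point made above.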

                    As an electrophysiologist myself in real life, I partly enjoy these kinds of discussions, but for Wikipedia's purposes, we are writing for the non-specialist general public, and one can over-think these things. It's important to be accessible. Poor horse! --Tryptofish (talk) 14:20, 25 October 2011 (UTC) Yes. Our audience here is not neuroscience students, much less neuroscience professionals -- they have much better sources of information. If this article is not accessible to "outsiders", it serves no purpose. Looie496 (talk) 15:02, 25 October 2011 (UTC) I don't disagree with that. But one pet peeve I often have with scientific writing for the lay public is that, in the quest to make it accessible, it is made inaccurate. All I'm saying is that we should strive to make it both accessible AND accurate. The stuff I wrote about above is obviously too advanced for this article. That's why I'm having this discussion on the talk page with you experts, so we can agree on the 'truth' before we agree on the presentation in the article. What is, after all, the point of making it understandable if the understanding is wrong? I'm sure if we put our heads together, we can figure out a wording that will be accessible and correct. Synaptidude (talk) 17:22, 25 October 2011 (UTC) Good, I think we actually all agree about that. --Tryptofish (talk) 17:35, 25 October 2011 (UTC) Great! Now I'm going to go back on it, just a little. I just want to float an idea with you guys. What if we just alter the wording in the intro, on the subject of threshold, just a little bit to make it accurate, and then include farther down in the article a more technical description of threshold? I think that it could be made completely accurate and accessible, kind of on a 'Scientific American' level: challenging, but not impossible. That way, those who just want, and can grasp, the simple explanation get that right up front, and those who want more detail can get that too.
Do you think that would be consistent with the mission of Wikipedia and useful as well? Just a thought - interested in your opinion. Synaptidude (talk) 18:01, 25 October 2011 (UTC) I'm fine with that. I'm also better at taking the electronic equivalent of a red pen to something that's already written than I am at visualizing what it will look like before a first version is written. So I'd say WP:BE BOLD and go for it, with the understanding that you can't break anything, and anything you write will end up getting changed by me and others in any case. --Tryptofish (talk) 18:08, 25 October 2011 (UTC)

                    I'd like to suggest a reorganization of this article to make it more readable and less repetitive. I'm prepared to do it over the next few weeks myself or with help, if there are no objections. In my eyes, there are 3 parts to this article, and most of my suggestions are for the 2nd.

                    1. The Lead/Overview - a lot of entries in the Talk agree this needs to be changed to be more coherent and accessible to the lay reader. I think we should make the 2nd paragraph of the lead a much shorter description, just the basic idea of what it means for an AP to be an AP (I know, harder than it sounds). The info currently in the 3rd paragraph should be later in the article; in its place, we could put a paragraph that takes a quick introductory walk through the later sections. The Overview is okay, but I think voltage changes and threshold potential will make more sense if the explanation walks the reader through an image like Figure 1A, although one that is a little clearer. That is usually how the action potential is taught, with constant reference to a graph.

                    2. Current sections 2 through 6 - Here's where the article is a bit messy. How I think it could be organized:

                    Biophysical basis Phases Propagation Termination

                    I think Phases should be included in the Biophysical Basis section. Besides including a lot of similar information, the phases are described using the same mechanisms that are being talked about in 'Biophysics'. And the 'Biophysics' section currently has no structure - going through mechanisms phase-by-phase would give it that. The new section would have general information up front, then subsections for each phase.

                    The Neurotransmission section should be removed, and its contents sorted into the other sections. First of all, neurotransmission is about the release and reception of neurotransmitters - this is related to APs and should be referred to, but that can go in the 'Termination' section and be primarily links to relevant articles. Second, a lot of what's in this section rambles about things other than neurotransmission anyway. I'm not suggesting any particular content be removed, only moved. Some of that will be clear, some not - I don't know where the bit about sensory neurons and pacemaker potentials should go, though I do agree they should be in the article.

                    3. The miscellaneous sections - I have no issue with their organization.

                    So broadly speaking, the changes are to fix the beginning of the article for the lay reader, and then reorganize the middle sections so that they walk through the AP from how it starts, to how it moves, to what it does when it gets where it's going.

                    Twodarts (talk) 02:21, 18 December 2011 (UTC)

                    Go for it! One of the things I have tried to do, in the limited work I've done on this article, is to leave most of the basic biophysics for the membrane potential article, and focus this article on the biophysics that are specifically relevant to excitability. But if you're interested in doing major work on the article, you should feel free to do whatever seems appropriate to you. The article is definitely full of redundancy and extraneous material at the moment, so I think you should feel free to get rid of stuff if you don't think it belongs. Regards, Looie496 (talk) 17:29, 18 December 2011 (UTC)

                    The article states (emphasis mine):

                    "To be specific, myelin wraps multiple times around the axonal segment, forming a thick fatty layer that prevents ions from entering or escaping the axon. This insulation prevents significant signal decay as well as ensuring faster signal speed. This insulation, however, has the restriction that no channels can be present on the surface of the axon. There are, therefore, regularly spaced patches of membrane, which have no insulation. These nodes of ranvier can be considered to be 'mini axon hillocks', as their purpose is to boost the signal in order to prevent significant signal decay."

                    First it says that the insulation prevents signal decay. Then it says that it's the gaps in the insulation that prevent signal decay (i.e., it's not the insulation itself), which implies to me that the insulation may even contribute to the decay (or why else would there be signal boosters needed?). Could someone do a bit of rewrite to clarify the intended meaning here? DMacks (talk) 06:19, 12 January 2012 (UTC)
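                    The two statements can be reconciled quantitatively: passive (electrotonic) spread decays roughly exponentially with distance, myelin lengthens the decay constant, and the nodes regenerate whatever amplitude remains. A toy sketch (all numbers are illustrative round values, not measured constants):

```python
import math

def passive_decay(v0_mV, distance_mm, lambda_mm):
    """Electrotonic (passive) amplitude remaining after a signal spreads
    `distance_mm` along a cable with length constant `lambda_mm`:
    V(x) = V0 * exp(-x / lambda)."""
    return v0_mV * math.exp(-distance_mm / lambda_mm)

# Illustrative round numbers only:
V_PEAK = 100.0     # mV above rest, regenerated at each node of Ranvier
INTERNODE = 1.0    # mm of myelinated axon between nodes
LAM_MYELIN = 3.0   # mm: myelin cuts membrane leak, so lambda is long
LAM_BARE = 0.3     # mm: bare, leaky membrane has a short lambda

at_next_node = passive_decay(V_PEAK, INTERNODE, LAM_MYELIN)   # ~72 mV survives
if_unmyelinated = passive_decay(V_PEAK, INTERNODE, LAM_BARE)  # ~4 mV: effectively lost
```

                    With the insulation, enough signal survives the internode for the next node to boost it back to full size; without it, the signal would die out long before reaching a booster. So the insulation and the nodes prevent decay jointly, not separately.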

                    From what I understand, all-or-none signals should be digital. Unless I'm missing something here.--Miracleman123 (talk) 06:58, 4 July 2012 (UTC)

                    I don't think either "digital" or "analog" encompasses the full truth. The amplitude is essentially all-or-none, but the waveform is smooth and the timing is not discretized. Looie496 (talk) 16:10, 4 July 2012 (UTC)

                    This question has come up before, and I wonder whether we should simply delete the description as "analog" (in other words, say nothing, neither analog nor digital). As Looie correctly says, action potentials are continuous variations in the value of the membrane potential, and therefore, their waveform shapes are analog, rather than digital bits. (In fact, strictly speaking, even their amplitudes can vary, depending on the resting potential that serves as a baseline. But that's not the same thing as graded subthreshold potentials.) What is all-or-none is whether they occur or not. They are nothing like what we generally consider to be digital signals, so I think that it's technically correct to describe them as analog. But it gets real confusing to say that in the same sentence that calls them "all-or-none". I've just moved the link to analog signals to another place on the page. Is this better? --Tryptofish (talk) 19:43, 4 July 2012 (UTC)

                    Also the "purely" digital signals in electronics have a waveform that ascends and descends, and slight voltage variations that limit their usability (think of high temperatures, where chips start behaving erratically). The notion that an action potential has a definite waveform is irrelevant to its digital character. Only the frequency of action potentials passing by contains the relevant information from, for instance, a sensory organ. Viridiflavus (talk) 13:18, 13 January 2013 (UTC)
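                    The distinction being debated here — occurrence is all-or-none, while the waveform and timing stay continuous — can be made concrete with a toy threshold detector (the trace, sampling step, and threshold below are invented for illustration):

```python
def detect_spikes(trace_mV, dt_ms, threshold=-40.0):
    """Toy spike detector: a spike 'occurs' (all-or-none) whenever the
    voltage crosses threshold from below; what is reported is only the
    continuous-valued time of each crossing, not an amplitude."""
    times = []
    for i in range(1, len(trace_mV)):
        if trace_mV[i - 1] < threshold <= trace_mV[i]:
            times.append(i * dt_ms)
    return times

# Made-up membrane trace (mV), sampled every 0.1 ms: one subthreshold
# bump, then one suprathreshold event.
trace = [-70, -60, -55, -65, -70, -50, -30, 10, -20, -75]
spike_times = detect_spikes(trace, dt_ms=0.1)  # one event, near t = 0.6 ms
```

                    What the detector reports is binary per event (a spike happened or it didn't), yet each reported time is a continuous quantity — which is why neither "digital" nor "analog" alone fits.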

                    I popped in here because this article was in an error category (invalid LCCNs) and one thing I immediately noticed was that the references were /very/ messy. For one thing, only about half the books in the 'bibliography' section were actually cited, and there were a number of books that weren't in that section. I've been doing quite a bit of work on reorganizing how they are laid out, with the goal of trying to get them all into some kind of 'uniform' appearance, and laid out in a way that's actually useful.

                    Though I am changing the format of the book references to use |ref=harv, it's not out of any intention to violate CITEVAR, or to force something like list-defined references on the article. The format as it existed was, like I said, very confused, and moving the books into a separate section and using <> and <> seemed like the best way to hammer this into something more usable, and less messy.

                    I would ask, though, that if there is a problem with how I'm doing this, you just poke me and say 'hey dummy, do this instead'. I'm not changing the content, but I think where I am now would be a better 'starting point' to get this to something decent that the regular content editors of this can deal with (and that's not ugly) than where it was, even if it means moving in a different direction. If anyone wants to comment, please do so. Revent talk 11:17, 27 August 2014 (UTC)

                    Just to make it extremely clear, I'm being very careful to not damage the referencing, checking each edit multiple times; every citation 'points' at exactly the same source, it's just the formatting of the reference section (and completing the metadata on all the references) that I'm changing. I am using harvard references for the books, but the same 'visual' change can be done without that if people want me to change it back, it was just easier for sorting out the 'bundling'. Revent talk 02:26, 28 August 2014 (UTC)

                    Ok, I'm finally done sorting all the books out. This is actually, for the books, more like the 'original' citation format, which was 'manual' short footnotes and a list; just now they are actually using the template. I think it would make sense (especially for editability in parts) to also do the same thing with the journals, but I'm not going to do that unless people express that they want me to. If that does turn out to be desirable, please ping me and I will do it. Revent talk 07:37, 28 August 2014 (UTC)

                    Is it accurate to understand that an action potential is what can happen at a place, position or point on a cell's membrane, as indicated or measured by a point probe, rather than the succession of AP along, say, a neuron's axon? That is, that an AP is not the traveling of an event (the "spike train"?), but just the occurrence of the voltage event at a point?

                    If so, could it be appropriate to amend and add to the first sentence in the intro, from, "In physiology, an action potential is a short-lasting event in which the electrical membrane potential of a cell rapidly rises and falls, following a consistent trajectory.",

                    to, "In physiology, an action potential is a short-lasting event at a position in a cell in which the electrical membrane potential rapidly rises and falls, following a consistent trajectory. An action potential at one position may initiate another following action potential in a nearby continuous part of the membrane, such that an impulse signal made up of a sequence of action potentials travels along the cell membrane." ? (Without the boldface used here to make the phrase stand out, and with 'spike train' & 'impulse' struck out and 'signal' replacing 'impulse'.) UnderEducatedGeezer (talk) 03:53, 26 June 2016 (UTC)

                    What I'm trying to get at is, which of these two is most like an AP: 1. a CANNON FUSE, burning from where it's lit, then on along its whole length & finally to its end where it causes something to explode, or, 2. a single POINT on a cannon fuse, which point was initially not burning but is ignited by a burning point just before it and then it burns out? UnderEducatedGeezer (talk) 02:37, 29 June 2016 (UTC)

                    This is a very good question that took me aback, as I can immediately see how this can be so non-obvious. I think your analogy of the cannon fuse is not a good one. The AP itself happens as soon as the neuron is stimulated (i.e. due to ion channels opening suddenly somewhere on a dendrite causing a sudden rush of ions, sorta like blowing the hatch on a spaceship) (though of course in some exceptions it doesn't have to be stimulated at a point or externally). This sudden rush of ions makes the charge at that particular dendrite of the neuron different from the undisturbed other end(s) of the neuron (in the canonical pyramidal neuron, this would be the axon). That charge difference is "felt" as an electric field inside the neuron and can be measured at any point inside the neuron – if the cell is relatively simple, that field alone will be enough to trigger the release of neurotransmitters at the end of the axon (or whatever signalling body). That release is probably what you are thinking of as the cannon "blast", and the propagation of the electric field is what you are thinking of as the "fuse". The electric field in the neuron indeed is a bit messy/slow to propagate, since the "wire" along which it travels is just a soup of ions held together by a rather leaky membrane, but that is not a good analogy to a "fuse".
                    I should add that while the strength of the electric field as it propagates matches the shape of the voltage of the original action potential, it should not itself be called an action potential at any point inside the cell, even though the propagation of the electric field is often referred to as the "propagation of the action potential" (with the important exceptions at the Nodes of Ranvier, where new APs are triggered to "boost the signal", so to speak, and probably other exceptions as well that I don't recall, because there's always exceptions). In summary, in the most basic case: the AP itself happens once, at the point at which a neuron is stimulated; a charge difference is created between the stimulated end of the neuron and the signalling end, which is expressed as an electric field that "propagates" in a lossy manner in the ion soup of the cell (not an AP); and the charge difference felt at the signalling end causes it to release neurotransmitters into the synapse to signal the next neuron (not an AP). SamuelRiv (talk) 20:40, 30 June 2016 (UTC)

                    @SamuelRiv, Looie496, Tryptofish, and Lova Falk: Thanks Samuel, for your lengthy response, and for putting it up under my real AP question, I appreciate it! However, I'm nonetheless deeply confused by your answer, so I have some questions about it. And please bear in mind that my nom-de-plume is accurate, I am under-educated (& it could be said that I'm slow, too)! (& I added notification to Looie496 & Tryptofish & Lova Falk in hopes for any possible further info on the subject.) You said, "The AP itself happens as soon as the neuron is stimulated." As far as I understand, some stimulations do not cause an AP, because their total contributions to the potential at the axon hillock are too small.
                    Since the AP in a neuron is essentially what happens after neurotransmitters cause ligand-gated pores to open, allowing a rush of ions into the neuron, which then passively spread along the neuron membrane (electrotonus), down the dendrites, across the soma, and toward the axon hillock, getting weaker as they spread out, and the AP will then, at the point of the axon hillock, either happen or not happen, depending on the strength/quantity of the input(s), I cannot see how the AP can happen as soon as the neuron is stimulated. The typical waveform describing an AP often shows small voltages that have reached the axon hillock which do not reach the necessary trigger voltage & consequently fail to initiate an AP there. Is this understanding not somewhat accurate? Here is a graph which shows failed initiations of the AP (although the indication of the refractory period is not quite right, I think): UnderEducatedGeezer (talk) 02:45, 4 July 2016 (UTC)

                    Now, your suggestion that the rush of ions into the neuron from stimulation causes a difference in potential between that initial entry point and the axon terminals at the end of the neuron makes sense to me, but since the movement of that potential is passive & graded, and if I'm right consequently diminishes with distance, even though it can presumably be measured at the axon terminals, I would think that by itself, without an active propagation of a stimulated action potential, it would be vanishingly small & almost always too small to open voltage-gated calcium pores at those endings to allow the release of neurotransmitter vesicles into a synapse. Am I misunderstanding something here? UnderEducatedGeezer (talk) 03:10, 1 July 2016 (UTC)

                    And here is one site that leads me to think that the AP is a point event that is then propagated, and also uses a kind of a 'fuse' analogy. UnderEducatedGeezer (talk) 21:25, 3 July 2016 (UTC)
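                    For what it's worth, the picture described above — passively spreading inputs summate at the initiation site, which either reaches threshold and fires a stereotyped AP or does not — is the standard leaky integrate-and-fire abstraction. A minimal sketch (the leak factor, threshold, and EPSP sizes are arbitrary illustrative numbers):

```python
def leaky_integrate_and_fire(inputs_mV, leak=0.9, v_rest=-70.0,
                             threshold=-55.0):
    """Each time step the deviation from rest decays by `leak`, then the
    incoming EPSP (mV) is added; crossing threshold fires an all-or-none
    spike and resets the potential to rest.  Returns spike time steps."""
    v = v_rest
    spike_steps = []
    for t, epsp in enumerate(inputs_mV):
        v = v_rest + leak * (v - v_rest) + epsp
        if v >= threshold:
            spike_steps.append(t)
            v = v_rest  # stereotyped AP, then back to rest
    return spike_steps

# The same three EPSPs fail when spread out (each leaks away first)...
spread_out = leaky_integrate_and_fire([6, 0, 0, 6, 0, 0, 6, 0])  # no spike
# ...but summate to threshold when they arrive close together:
clustered = leaky_integrate_and_fire([6, 6, 6, 0, 0, 0, 0, 0])   # fires at step 2
```

                    The same total input fails when spread out but fires when clustered, which matches the "failed initiations" visible in the graph mentioned above.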

                    Is a 'spike train' the succession of action potentials along an axon (like a gunpowder fuse burning from start to finish along the length of the fuse), or a rapid repeated firing of the neuron itself (like a machine gun firing some number of rounds one after another rapidly from one trigger pull)? UnderEducatedGeezer (talk) 04:00, 26 June 2016 (UTC)

                    It is the latter. If this is unclear then it should definitely be spelled out in the article. SamuelRiv (talk) 00:32, 30 June 2016 (UTC)

                    Thanks. I kinda thought it was as you said, such as it meaning repeated outputs over a short period of time, and the article may or may not actually be unclear, depending on my question above about what actually is the AP, point event or linear one. Current related text in the article says, "…the temporal sequence of action potentials generated by a neuron is called its "spike train"" (bolding added). Now, if an AP is the whole event, like the fuse in the example above burning from start to end, then 'spike train' would clearly be repeated events in time of 're'burnings (or resettings) of the fuse, i.e., repeated signals. But if an AP is the event at a point on the neuron membrane manifesting the typical waveform of resting, rise, maximum, falling, overshoot, & return to resting potential, then the 'spike train' would be sequential APs along the length of the axon over a (very short) period of time, or just one 'signal'. So even though I thought 'spike train' really referred to a sequence in time of repeated signals, as you said, it still seems that both from some other readings and at least quasi-logic, an AP would just be the measured event at a point (unless people are using the term for both events, the measured response at a point and the sequential firings of that AP along the whole length of an axon?). UnderEducatedGeezer (talk) 02:55, 30 June 2016 (UTC)
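                    To spell out the "latter" sense: a spike train is a sequence of firing times recorded at one site, usually summarized by its interspike intervals or a mean rate. A small sketch with invented spike times:

```python
def interspike_intervals(spike_times_ms):
    """Differences between successive spike times recorded at ONE point;
    a 'spike train' is this sequence of repeated firings in time, not
    the travel of a single AP down the axon."""
    return [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]

def mean_rate_hz(spike_times_ms):
    """Mean firing rate: spikes per second over the recorded span."""
    if len(spike_times_ms) < 2:
        return 0.0
    span_s = (spike_times_ms[-1] - spike_times_ms[0]) / 1000.0
    return (len(spike_times_ms) - 1) / span_s

# Hypothetical firing times (ms) from one electrode:
train = [12.0, 37.0, 61.0, 88.0, 112.0]
# Four intervals over 100 ms of recording -> mean rate of 40 Hz.
```

                    Nothing here refers to position along the axon; the train is purely a sequence in time at one recording point.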

                    I have just modified 3 external links on Action potential. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

                    When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at <> ).

                    As of February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot . No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template <> (last update: 15 July 2018).

                    • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
                    • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

                    I have just modified one external link on Action potential. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

                    When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.


                    Recently a video explaining the action potential was deleted (and this was the case for MANY medical articles on the same day). I do not know why it was deleted, but I am guessing that it was considered too "simple". I will not undo this deletion since I don't feel I'm entitled to, but it does raise the question: are Wikipedia articles made for people already in the medical field, or for everyone? If the answer is the former, it would be disappointing, but I would understand the video deletion. If it's the latter, why has the video been removed? Simplification will always be made (even when experts talk to each other). I do not understand this choice, which goes against a popular use of Wikipedia. — Preceding unsigned comment added by Alouzi (talk • contribs) 20:50, 16 April 2018 (UTC)

                    @Alouzi: Those videos were not removed for that reason, and Wikipedia is written for the general public. What happened with those videos is that, following a very extensive discussion, editors decided that they violated some Wikipedia policies (and some of them also contained factual errors). You can see the discussion at Wikipedia:WikiProject Medicine/Osmosis RfC. --Tryptofish (talk) 22:35, 16 April 2018 (UTC) @Tryptofish: Thank you for your answer, it is greatly appreciated. — Preceding unsigned comment added by Alouzi (talk • contribs) 13:54, April 17, 2018 (UTC)

                    I have temporarily reverted a very large addition to the section on plant action potentials. For reference, here it is:

                    Plant action potentials

                    Plant and fungal cells [a] are also electrically excitable. The action potential observed in vascular plants is better observed than those of vegetative [1] [2] because the diffusion of electrical signals occurs primarily in the phloem sieve tube – a distinctive characteristic of higher plants [3] . [4]

                    The general progression of a plant action potential is the same as that of an animal action potential; however, plants possess alternate mechanisms.

                    Resting Phase

                    Plant cells are commonly observed to have more negative resting membrane potentials and rising phase membrane potentials. For example, the Dionaea’s resting membrane potential is approximately -120mV [5] , whereas neurons are regularly between -40mV to -90mV [6] .

                    To attain understanding regarding plant action potentials, Opritov et al. recorded the electric potentials of maize leaves. To do so, they cut the leaf accordingly to allow aphids to attach for a long period to feed in efforts to expose the sieve. Once exposed, the researchers removed the aphids carefully with a laser to access the contents released by the leaf. This liquid-like substance was then measured with a microelectrode that was previously calibrated with a control of water. [7] The recorded values were similar to those that were expected when reviewing a study of Mimosa pudica [8] which indicated that the resting membrane potential measured was significant.

                    Stimulation and Rising Phase

                    Stimulation also induces action potentials within plant cells; the most commonly mentioned stimulation is touch [5] . Unlike animals, the plant’s action potentials will not register any information regarding the characteristics of the interaction. [2] Upon stimulation, the depolarization in plant cells is not accomplished by an uptake of positive sodium ions, but rather the influx of calcium. [4] Logically, one can understand the plant’s lack of dependence on sodium ions to initiate depolarization because too many sodium ions lead to detrimental outcomes. [9] Together with the following release of positive potassium ions, which is common to plant and animal action potentials, the action potential in plants implies, therefore, an osmotic loss of salt (KCl), whereas the animal action potential is osmotically neutral, since equal amounts of entering sodium and leaving potassium cancel each other osmotically. The interaction of electrical and osmotic relations in plant cells [b] indicates an osmotic function of electrical excitability in the common, unicellular ancestors of plants and animals under changing salinity conditions, whereas the present function of rapid signal transmission is seen as a younger accomplishment of metazoan cells in a more stable osmotic environment. [10] It must be assumed that the familiar signalling function of action potentials in some vascular plants (e.g. Mimosa pudica) arose independently from that in metazoan excitable cells.

                    Peak

                    As calcium influxes towards the cytoplasm, they activate calcium-dependent anion channels, causing negatively charged ions, like chloride, to flow out of the cell thus further depolarizing the membrane. Similarly to the resting membrane potentials of plants and animals, the peaks correspond in a similar manner: they are commonly more negative. Dionaea’s action potential usually maximizes at -20mV, approximately 60mV less than an average nerve cell. [3]

                    Falling Phase and After-hyperpolarization

                    Unlike the rising phase and peak, the falling phase and after-hyperpolarization seem to depend primarily on cations that are not calcium. To initiate repolarization, the cell requires movement of potassium out of the cell through passive transport across the membrane. This differs from neurons because the movement of potassium does not dominate the decrease in membrane potential. In fact, to fully repolarize, a plant cell requires energy in the form of ATP to assist in the release of hydrogen from the cell – utilizing a transporter commonly known as H+-ATPase. [7] [3]

                    Although there is a lot of debate regarding the refractory period of a plant cell, what is not up to speculation is the fact that their refractory periods are much longer than those in animals, [8] and that in order to fire an action potential again, they require more sources of electrical current. [3]

                    Although animals and plants both possess action potentials, those of plants are often overlooked or ignored due to the plants’ lack of nerves and a nervous system. The deficiency of a brain or a specified location to integrate information makes it difficult to believe that the action potentials of plants create a response; however, plants definitely do perceive stimuli (without information regarding them) that can develop into a (generic) effector response. [2]

                    1. Holsinger, Kent E. "Reproductive Systems and Evolution in Vascular Plants." Vol. 97, no. 13, 20 June 2000, pp. 7037–7042.
                    2. Pyatygin, S. S. (2008). "Signaling Role of Action Potential in Higher Plants" (PDF). Russian Journal of Plant Physiology, 55: 312–319.
                    3. Hedrich, Rainer, and Erwin Neher. "Venus Flytrap: How an Excitable Carnivorous Plant Works." Trends in Plant Science, vol. 23, no. 3, Mar. 2018, pp. 220–234.
                    4. Hedrich, Rainer. "Ion Channels in Plants." Physiological Reviews, vol. 92, Oct. 2012, pp. 1777–1811. doi:10.1152.
                    5. Hedrich, Rainer, and Erwin Neher. "Venus Flytrap: How an Excitable Carnivorous Plant Works." Trends in Plant Science, vol. 23, no. 3, Mar. 2018, pp. 220–234.
                    6. Purves, D., Augustine, G. J., Fitzpatrick, D., et al., editors. "Electrical Potentials Across Nerve Cell Membranes." In: Neuroscience. 2nd edition. Sunderland (MA): Sinauer Associates, 2001.
                    7. Opritov, V. A., et al. "Direct Coupling of Action Potential Generation in Cells of a Higher Plant (Cucurbita Pepo) with the Operation of an Electrogenic Pump." Russian Journal of Plant Physiology, vol. 49, no. 1, 2002, pp. 142–147.
                    8. Fromm, Jörg, et al. "Electrical Signaling along the Phloem and Its Physiological Responses in the Maize Leaf." Frontiers in Plant Science, vol. 4, no. 239, 4 July 2013, pp. 1–7. doi:10.3389/fpls.2013.00239.
                    9. Pardo, Jose M., and Francisco J. Quintero. "Plants and Sodium Ions: Keeping Company with the Enemy." Genome Biology, vol. 3, no. 6, 24 May 2002, pp. 1–4.
                    10. Gradmann, D., and Mummert, H. "Plant action potentials." In Spanswick, Lucas & Dainty (1980), pp. 333–344.

                    In part, there are formatting problems, but I also am concerned that this material fails WP:DUE. Plants just aren't that big a part of the topic, and it seems to me to be inappropriate to have so many subsections that recapitulate the descriptions of action potential stages higher on the page. --Tryptofish (talk) 22:33, 22 May 2018 (UTC)

                    A lot of it seems to have been reincorporated at Action_potential#Plant_action_potentials, and I added back a bit more. Specifically, the resting potential values for plants vs animals and some more specific info on the role of potassium in the post-peak phases. — Wug·a·po·des​ 00:09, 2 May 2021 (UTC)

                    Chemical biology: probes and therapeutics

                    MBoC is pleased to publish this summary of the Minisymposium “Chemical Biology: Probes and Therapeutics” held at the American Society for Cell Biology 2011 Annual Meeting, Denver, CO, December 4, 2011.

                    In the Chemical Biology Minisymposium, cochaired by Alice Ting and Lisa Belmont, the first three talks focused on the development of novel probes for cell biology, while the last three presentations highlighted inhibitors of proteins previously believed to be “undruggable.”

                    Daniel Hochbaum (Cohen lab, Harvard University) described an optical probe for imaging single action potentials using the fluorescence of a rhodopsin protein, Archaerhodopsin 3 (Arch), expressed in cultured rat hippocampal neurons. This voltage indicator exhibited a 10-fold improvement in sensitivity and speed over existing protein-based voltage indicators, with a twofold increase in brightness between � mV and +150 mV and a submillisecond response time. Arch detected single electrically triggered action potentials with a signal-to-noise ratio > 10. The mutant ArchD95N lacked endogenous proton pumping and showed 50% greater sensitivity than wild-type. Although it had a slower response (41 ms), ArchD95N resolved individual action potentials.

                    Alice Ting (MIT) presented a novel approach to determining the proteomic composition of subcellular compartments by targeting a promiscuous biotin-conjugating enzyme to subcellular regions. The labeled proteins were enriched and identified by mass spectrometry. This approach was used to determine the proteomes of mitochondria and the endoplasmic reticulum of live mammalian cells without using subcellular fractionation. Katie White, a graduate student in Ting's lab, then described improvements to in vivo fluorophore labeling using mutants of lipoic acid ligase (LplA). She engineered LplA to accept a blue coumarin fluorophore and then used yeast display evolution to evolve LplA into a probe ligase with high activity in the secretory pathway. The LplA variants allowed imaging of intra- or intercellular protein–protein contacts.

                    To expand the scope of bioluminescence imaging, Stephen Miller (University of Massachusetts Medical School) developed new aminoluciferin substrates for firefly luciferase that emit light at longer wavelengths than d -luciferin. Although these substrates were initially limited by product inhibition, this could be ameliorated by mutation of luciferase. Moreover, mutant luciferases were identified that displayed selectivity for these synthetic substrates over d -luciferin. This system has two advantages: it is potentially better suited for in vivo imaging, because tissue is more transparent to light at longer wavelengths, and orthogonal luciferase–luciferin pairs could allow multiplexed bioluminescence imaging.

                    An NMR-based fragment screen for inhibitors of the Ras oncoprotein presented by Guowei Fang (Genentech) identified 25 compounds that inhibit Ras with two distinct mechanisms of action. One class of compounds binds to a small pocket between Switch I and Switch II that expands to accommodate the inhibitor. These compounds competitively inhibit nucleotide exchange by blocking the interaction of Ras GDP with its nucleotide exchange factor, SOS. The second class of compounds binds in a pocket created by the interface of the Ras–SOS complex and acts by accelerating nucleotide release.

                    Corey Nislow (University of Toronto) highlighted the power of yeast genetics by screening cell-active, structurally diverse compounds against 1100 heterozygous strains (haploid for essential genes) and 5000 homozygous deletion strains. He identified 55 novel drug targets, including Sec14 and septin. Sec14 coordinates lipid biosynthesis with signaling pathways and was thought to be undruggable. He then showed how chemical genetic profiling revealed that elesclomol, a compound demonstrating efficacy in metastatic melanoma, inhibits the electron transport chain. These approaches will continue to bear fruit as additional targets are confirmed and similar technology is applied to Candida albicans and other organisms.

                    Lisa Belmont described a model for synergy between paclitaxel and navitoclax, a Bcl-2/Bcl-xL inhibitor, in which cells in mitotic arrest slowly degrade Mcl-1, while navitoclax causes acute inhibition of Bcl-xL. Across 50 cancer cell lines, cells with high levels of Bcl-xL relative to Mcl-1 exhibited lowered paclitaxel response and higher paclitaxel/navitoclax synergy. The cell line synergy translated to xenograft studies, and analysis of ovarian cancer tissue from paclitaxel-treated patients demonstrated that high Bcl-xL predicted poor response to paclitaxel. Taken together, the data suggest the paclitaxel–navitoclax combination might be effective in cancers expressing high levels of Bcl-xL.
