The ratio of M and L cones varies greatly among different people with normal color vision (e.g., values of 75.8% L with 20.0% M versus 50.6% L with 44.2% M in two male subjects).
This made me wonder whether other people perceive colors as more or less green/red than I do. But then I asked myself why I perceive colors similarly for objects both on my line of sight and off to the sides (except beyond about 90° horizontally from my line of sight). Is that because the ratio of cone cells is similar across the whole retina (within one individual)?
Shape matters: the relationship between cell geometry and diversity in phytoplankton
Size and shape profoundly influence an organism’s ecophysiological performance and evolutionary fitness, suggesting a link between morphology and diversity. However, not much is known about how body shape is related to taxonomic richness, especially in microbes. Here we analyse global datasets of unicellular marine phytoplankton, a major group of primary producers with an exceptional diversity of cell sizes and shapes and, additionally, heterotrophic protists. Using two measures of cell shape elongation, we quantify taxonomic diversity as a function of cell size and shape. We find that cells of intermediate volume have the greatest shape variation, from oblate to extremely elongated forms, while small and large cells are mostly compact (e.g. spherical or cubic). Taxonomic diversity is strongly related to cell elongation and cell volume, together explaining up to 92% of total variance. Taxonomic diversity decays exponentially with cell elongation and displays a log-normal dependence on cell volume, peaking for intermediate-volume cells with compact shapes. These previously unreported broad patterns in phytoplankton diversity reveal selective pressures and ecophysiological constraints on the geometry of phytoplankton cells which may improve our understanding of marine ecology and the evolutionary rules of life.
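As a toy illustration of the two axes analysed here (cell volume and elongation), a cell can be idealized as a spheroid; the aspect-ratio proxy below is illustrative only and is not one of the elongation metrics actually used in the study:

```python
import math

def spheroid_volume(a_um, c_um):
    """Volume (in µm^3) of a spheroid with equatorial semi-axis a and polar semi-axis c."""
    return 4.0 / 3.0 * math.pi * a_um**2 * c_um

def aspect_ratio(a_um, c_um):
    """Longest axis over shortest axis: 1 for a sphere, >1 for prolate (elongated)
    or oblate (flattened) cells. An illustrative proxy for cell elongation."""
    return max(a_um, c_um) / min(a_um, c_um)

# A compact sphere and a strongly elongated (prolate) cell of equal volume:
print(round(spheroid_volume(5.0, 5.0)))    # ≈ 524 µm^3, aspect ratio 1
print(round(spheroid_volume(2.5, 20.0)))   # ≈ 524 µm^3, same volume...
print(aspect_ratio(2.5, 20.0))             # ...but aspect ratio 8.0
```

Cells of identical volume can thus differ enormously in elongation, which is why volume and shape must be treated as separate axes when mapping diversity.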
Although microorganisms are flexible in their nitrogen (N) and phosphorus (P) content because of their ability to store these nutrients, mean P:N ratios and their variation show surprising similarity among and within microbial species, and even with plants and insect herbivores. For example, for marine phytoplankton, P:N ratios by mass vary between 0.04 and 0.30, with an overall average close to the Redfield ratio of 0.14. The average value for terrestrial plants at their natural field sites is in the range 0.07–0.08 [2–4]. Freshwater zooplankton and insect herbivores from terrestrial ecosystems are also similar, with mean P:N ratios of 0.10 and 0.08, respectively.
Stoichiometric constraints are an important factor in the regulation of microbial growth and in microbes' interactions with their environment. It has been proposed that an increase in growth rate requires increased protein biosynthesis and, therefore, an additional allocation of cellular resources to the synthesis of ribosomes and ribosomal RNA, which makes up the main part of cellular RNA. As a result of this allocation, the cellular P:N ratio increases. This idea is referred to as the growth-rate hypothesis (GRH) [5–7]. In agreement with this hypothesis, rRNA synthesis in bacteria is maintained in proportion to the cell's requirements, and the number of rRNA genes correlates with the rate at which phylogenetically diverse bacteria respond to resource availability [8, 9]. In marine bacteria, species-specific growth rates may be estimated by measuring the rRNA content of the organisms.
Ecological studies supporting the GRH usually consider P:N stoichiometry, growth rate, and RNA content across different organisms. A significant effect of the growth rate of an individual unicellular organism on its macromolecular composition has also been well documented [11–16]: impairment of growth in bacteria, yeast, algae, and fungi resulted in decreases in total RNA, rRNA, and protein contents and in the RNA:protein ratio. A direct proportionality between specific growth rate and RNA:protein ratio was reported in Escherichia coli. This finding agrees with the GRH, because the N and P in RNA and proteins dominate the total amounts of these elements in cells of unicellular organisms. We can conclude from these studies that molecular mechanisms activated in the cells of an individual organism in response to environmental conditions may be responsible for changes in P:N stoichiometry. At present, however, these mechanisms are not clear. According to studies of translational efficiency in vitro [17, 18], the most rapid bacterial growth demands maximally efficient ribosomes. Because the number of ribosomes is proportional to cellular rRNA and RNA contents, the increased efficiency of protein synthesis by ribosomes under faster growth reported in these studies would lead to a decrease in the cellular RNA:protein ratio. However, this view does not agree with the GRH and is not supported by the aforementioned studies of macromolecular composition. In addition, studies of ribosomal, messenger, and transfer RNA in yeast during a nutritional shift show that, although ribosomal RNA is significantly reduced after growth inhibition, messenger and transfer RNA are slightly increased. These observations give indirect support to the notion that protein synthesis in microorganisms under growth impairment may require fewer ribosomes than under favorable growth, leading to decreases in the RNA:protein and P:N ratios.
This cellular regularity may underlie the effect of growth rate on RNA:protein and P:N ratios at the level of an individual organism. The objective of this study was to investigate this hypothesis and to show that the cellular P:N ratio estimated from the RNA:protein ratio reproduces mean P:N ratios obtained experimentally for diverse microorganisms and plants. We analyzed macromolecular composition, number of active ribosomes, and peptide elongation rate in a set of unicellular prokaryotic and eukaryotic organisms (bacteria, yeast, fungi, algae) as functions of their growth rates. This information was used to calculate P:N stoichiometry of unicellular organisms and the cellular requirement of ribosomes for protein synthesis in different growth conditions.
The estimated P:N ratios derived from the RNA and protein contents of these organisms were found to lie in the same range as mean ratios reported for other, diverse organisms. There was a direct proportionality between the cell's ribosomal requirement for protein production and its RNA:protein ratio, which may underlie this similarity and the positive relationship between P:N ratio and growth rate in microorganisms. A decrease in the ribosomal requirement for protein production with an increase in growth rate could be explained in terms of the disposable soma theory.
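The link from macromolecular composition to elemental stoichiometry described above can be sketched with commonly used elemental mass fractions. The fractions below are rough literature values assumed here purely for illustration, not numbers taken from this study:

```python
def p_to_n_ratio(rna_mass, protein_mass,
                 p_frac_rna=0.09, n_frac_rna=0.155, n_frac_protein=0.17):
    """Estimate the cellular P:N ratio (by mass) from RNA and protein content,
    assuming (roughly) that P resides mostly in RNA while N resides in both
    RNA and protein. All elemental fractions are assumed illustrative values."""
    p = p_frac_rna * rna_mass
    n = n_frac_rna * rna_mass + n_frac_protein * protein_mass
    return p / n

# RNA:protein mass ratio of 0.2 (fast growth) vs. 0.05 (impaired growth):
print(round(p_to_n_ratio(0.20, 1.0), 3))   # 0.09
print(round(p_to_n_ratio(0.05, 1.0), 3))   # 0.025
```

Under these assumptions a higher RNA:protein ratio maps directly onto a higher P:N ratio, which is the qualitative behavior the GRH predicts, and the resulting values fall within the experimentally reported range quoted earlier (roughly 0.04–0.30).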
Does the ratio between M and L cone cells vary in one individual? - Biology
The central dogma hinges on the existence and properties of an army of mRNA molecules that are transiently brought into existence in the process of transcription and often, shortly thereafter, degraded away. During the short time that they are found in a cell, these mRNAs serve as a template for the creation of a new generation of proteins. The question posed in this vignette is this: On average, what is the ratio of translated message to the message itself?
Though there are many factors that control the protein-mRNA ratio, the simplest model points to an estimate in terms of just a few key rates. To see that, we need to write a simple “rate equation” that tells us how the protein content will change in a very small increment of time. More precisely, we seek the functional dependence between the number of protein copies of a gene (p) and the number of mRNA molecules (m) that engender it. The rate of formation of p is equal to the rate of translation times the number of messages, m, since each mRNA molecule can itself be thought of as a protein source. However, at the same time new proteins are being synthesized, protein degradation is steadily taking proteins out of circulation. Further, the number of proteins being degraded is equal to the rate of degradation times the total number of proteins. These cumbersome words can be much more elegantly encapsulated in an equation which tells us how in a small instant of time the number of proteins changes, namely,
dp/dt = βm − αp,
where α is the degradation rate and β is the translation rate (though the literature is unfortunately torn between those who define the notation in this manner and those who use the letters with exactly the opposite meaning).
Figure 1: Ribosomes on mRNA as beads on a string (from: http://bass.bio.uci.edu/
We are interested in the steady-state solution, that is, what happens after a sufficiently long time has passed and the system is no longer changing. In that case dp/dt = 0 = βm − αp. This tells us in turn that the protein-to-mRNA ratio is given by p/m = β/α. We note that this is not the same as the number of proteins produced from each mRNA; that value also requires knowing the mRNA turnover rate, which we take up at the end of the vignette. What is the value of β? A rapidly translated mRNA will have ribosomes decorating it like beads on a string, as captured in the classic electron micrograph shown in Figure 1. Their distance from one another along the mRNA is at least the size of the physical footprint of a ribosome (≈20 nm, BNID 102320, 105000), which is the length of about 60 base pairs (length of a nucleotide ≈0.3 nm, BNID 103777), equivalent to ≈20 aa. The rate of translation is about 20 aa/sec. It thus takes at least one second for a ribosome to move along its own physical footprint on the mRNA, implying a maximal overall translation rate of β ≈ 1 s⁻¹ per transcript.
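The footprint argument can be written out as a few lines of arithmetic, using the same rounded numbers as in the text:

```python
ribosome_footprint_nm = 20.0    # physical size of a ribosome along the mRNA
nm_per_nucleotide = 0.3         # length of one nucleotide along the mRNA
nt_per_codon = 3                # three nucleotides encode one amino acid
translation_rate_aa_per_s = 20.0

# Footprint expressed in amino acids encoded: ≈ 22 aa (the text rounds to ≈20)
footprint_aa = ribosome_footprint_nm / nm_per_nucleotide / nt_per_codon

# Time for a ribosome to clear its own footprint, and the resulting
# maximal initiation/translation rate per transcript
clearance_time_s = footprint_aa / translation_rate_aa_per_s   # ≈ 1.1 s
beta_max_per_s = 1.0 / clearance_time_s                       # ≈ 0.9 /s, i.e. β ≈ 1 s⁻¹
print(round(beta_max_per_s, 2))   # 0.9
```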
The effective degradation rate arises not only from degradation of proteins but also from a dilution effect as the cell grows. Indeed, of the two effects, often the cell division dilution effect is dominant and hence the overall effective degradation time, which takes into account the dilution, is about the time interval of a cell cycle, τ. We thus have α = 1/τ.
In light of these numbers, the ratio p/m is therefore (1 s⁻¹)/(1/τ) = τ. For E. coli, τ is roughly 1000 s and thus p/m ≈ 1000. Of course, if mRNAs are not translated at the maximal rate the ratio will be smaller. Let's perform a sanity check on this result. Under exponential growth at a medium growth rate, E. coli is known to contain about 3 million proteins and 3000 mRNA (BNID 100088, 100064). These constants imply a protein-to-mRNA ratio of ≈1000, precisely in line with the estimate given above. We can perform a second sanity check based on information from previous vignettes. In the vignette on "What is heavier an mRNA or the protein it codes for?" we derived a mass ratio of about 10:1 for mRNA to the proteins they code for. In the vignette on "What is the macromolecular composition of the cell?" we mentioned that protein is about 50% of the dry mass in E. coli cells, while mRNA is only about 5% of the total RNA in the cell, which is itself roughly 20% of the dry mass. This implies that mRNA is about 1% of the overall dry mass. So the ratio of protein copies to mRNA copies should be about 50 times 10, or 500 to 1. From our point of view, all of these sanity checks hold together very nicely.
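The steady-state estimate and both sanity checks can be reproduced directly from the rounded numbers quoted above:

```python
# Steady state of dp/dt = beta*m - alpha*p gives p/m = beta/alpha = beta*tau
beta = 1.0     # maximal translation rate per transcript, 1/s
tau = 1000.0   # effective degradation time = E. coli cell-cycle time, s
print(beta * tau)            # 1000.0: predicted protein-to-mRNA ratio

# Sanity check 1: direct census of an E. coli cell (BNID 100088, 100064)
proteins, mrnas = 3e6, 3e3
print(proteins / mrnas)      # 1000.0

# Sanity check 2: mass composition. Protein is ≈50% of dry mass, mRNA ≈1%,
# and an mRNA is ≈10x heavier than the protein it encodes, so the copy-number
# ratio is (mass ratio) x (per-molecule mass ratio)
print((0.50 / 0.01) * 10)    # 500.0, the same order of magnitude
```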
Figure 2: Simultaneous measurement of mRNA and protein in E. coli. (A) Microscopy images of mRNA level in E. coli cells. (B) Microscopy images of protein in E. coli cells. (C) Protein copy number vs mRNA levels as obtained using both microscopy methods like those shown in part (A) and using sequencing based methods. From Taniguchi et al. Science. 329, 533 (2010).
Experimentally, how are these numbers on protein to mRNA ratios determined? One elegant method is to use fluorescence microscopy to simultaneously observe mRNAs using fluorescence in-situ hybridization (FISH) and their protein products which have been fused to a fluorescent protein. Figure 2 shows microscopy images of both the mRNA and the corresponding translated fusion protein for one particular gene in E. coli. Figure 2C shows results using these methods for multiple genes and confirms a 100- to 1000-fold excess of protein copy numbers over their corresponding mRNAs. As seen in that figure, not only is direct visualization by microscopy useful, but sequence-based methods have been invoked as well.
For slower-growing organisms such as yeast or mammalian cells we expect a larger ratio, with the caveat that our assumptions about maximal translation rate become ever more tenuous, and with that our confidence in the estimate. For yeast under medium to fast growth rates, the number of mRNAs was reported to be in the range of 10,000–60,000 per cell (BNID 104312, 102988, 103023, 106226, 106763). As yeast cells are ≈50 times larger in volume than E. coli, the number of proteins can be estimated as larger by that proportion, or 200 million. The ratio p/m is then ≈2×10⁸/2×10⁴ ≈ 10⁴, in line with the experimental value of about 5,000 (BNID 104185, 104745). For yeast dividing every 100 minutes this is on the order of the number of seconds in its generation time, in agreement with our crude estimate above.
Figure 3: Protein to mRNA ratio in fission yeast. (A) Histogram illustrating the number of mRNA and protein copies as determined using sequencing methods and mass spectrometry, respectively. (B) Plot of protein abundance and mRNA abundance on a gene-by-gene basis. Adapted from S. Marguerat et al., Cell, 151:671, 2012. Recent analysis (R. Milo, Bioessays, 35:1050, 2014) suggests that the protein levels were underestimated and that a correction factor of about 5-fold should be applied, making the protein-to-mRNA ratio closer to 10⁴.
As with many of the quantities described throughout the book, the high-throughput, genome-wide craze has hit the subject of this vignette as well. Specifically, using a combination of RNA-Seq to determine the mRNA copy numbers and mass spectrometry methods and ribosomal profiling to infer the protein content of cells, it is possible to go beyond the specific gene-by-gene estimates and measurements described above. As shown in Figure 3 for fission yeast, the genome-wide distribution of mRNA and protein confirms the estimates provided above, showing more than a thousand-fold excess of protein to mRNA in most cases. Similarly, in mammalian cell lines a protein to mRNA ratio of about 10⁴ is inferred (BNID 110236).
Figure 4: Dynamics of protein production. (A) Bursts in protein production resulting from multiple rounds of translation on the same mRNA molecule before it decays. (B) Distribution of burst sizes for the protein beta-galactosidase in E. coli. (Adapted from L. Cai et al., Nature, 440:358, 2006.)
So far we have focused on the total number of protein copies per mRNA, not the number of proteins produced in each burst of translation from a given mRNA. This so-called burst size is depicted in Figure 4, which shows, for the protein beta-galactosidase in E. coli, the distribution of observed burst sizes, falling off quickly from the common case of a handful of proteins per burst to rarer bursts of more than 10.
Finally, we note that there is a third meaning to the question that entitles this vignette: we could ask how many proteins are made from each individual mRNA before it is degraded. For example, in fast-growing E. coli, mRNAs are degraded roughly every 3 minutes, as discussed in the vignette on "What are the degradation rates of mRNA and proteins?". This time scale is some 10–100 times shorter than the cell-cycle time. As a result, to move from the statement that the protein-to-mRNA ratio is typically 1000 to the number of proteins produced from an mRNA before it is degraded, we need to divide by the number of mRNA lifetimes per cell cycle. We find that in this rapidly dividing E. coli scenario, each mRNA gives rise to about 10–100 proteins before being degraded.
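The division described above can be made explicit, using the text's range of 10–100 mRNA lifetimes per cell cycle:

```python
p_per_m = 1000.0   # steady-state protein-to-mRNA ratio estimated above

# Dividing by the number of mRNA lifetimes per cell cycle converts the
# steady-state ratio into proteins made per mRNA before it is degraded
for lifetimes_per_cycle in (10.0, 100.0):
    print(p_per_m / lifetimes_per_cycle)   # 100.0, then 10.0
```

The two ends of the range bracket the 10–100 proteins per mRNA quoted in the text.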
A recent study (G. Csardi et al., PLOS Genetics, 2015) suggests revisiting the basic question of this vignette. Careful analysis of tens of studies of mRNA and protein levels in budding yeast, the most common model organism for such studies, suggests a non-linear relation in which genes with high mRNA levels have a higher protein-to-mRNA ratio than lowly expressed genes. The correlation between mRNA and protein thus does not have a slope of 1 on a log-log scale but rather a slope of about 1.6, which also explains why the dynamic range of protein abundances is significantly larger than that of mRNA.
Color appearance: hue and saturation scaling
Figures 1a and 1b show, for the Newtonian-View and Maxwellian-View, respectively, group mean hue scaling data as a function of wavelength. The hue values have been re-scaled by their associated saturation functions such that the hues sum not to 100% but to the associated saturation values (see above). The means for females are shown by symbols and the means for males by continuous lines. The error bars are SEMs; for clarity, those for males extend only below the data points, while those for females extend only above the data points. At each wavelength, the group means, and their SEMs, were obtained by averaging the R, Y, G, B values obtained from each participant.
Color appearance in the fovea of monochromatic lights. Group means of individual hue functions re-scaled by the associated saturation values; hue values sum to the saturation values. (a) Stimuli seen in Newtonian view. Error bars (SEMs) extend above the symbols for females and below the lines for males. (b) Maxwellian view. (c) Data from Newtonian view smoothed and re-plotted on a two-dimensional color space (Uniform Appearance Diagram; see text for details). (d) Data from Maxwellian view smoothed and re-plotted on a Uniform Appearance Diagram.
There appear to be sex-related differences, but they are small, and it is not easy to appreciate their magnitude or direction from these plots. The effects are clearer when the data in Figure 1a, b are displayed in a color space, the UAD, in Figure 1c, d. The data for females are rotated slightly with respect to those of males: in most parts of the spectrum, the rotation of the female data is clockwise with respect to the male data. This rotation is implicit in the data and is not the result of any analytic manipulation. Consider, for example, the points labeled 510 nm in Figure 1c. For females, the point is almost on the G-R vertical axis, meaning that the sensation is close to unique G; for males, the same wavelength is still within the BG quadrant, meaning that its wavelength must increase by a few nanometers for it to appear unique G.
We are dealing with data from two viewing conditions. We have previously shown that these conditions result in systematic differences in the hues elicited by each wavelength. We digress to show that the sex-related differences that are the central point of this paper are not due to any differences in viewing conditions.
Although the results were qualitatively similar, there is a problem that prevents us from simply amalgamating the two data sets. Despite having the same retinal illuminance, there is an important difference: our stimuli were presented as brief flashes with a minimum inter-trial interval of 20 s in a darkened testing room, which meant that participants' pupils were widely dilated. Thus, in the Newtonian-View much of the light entered through the periphery of the pupil and therefore struck the receptors at angles greater than those for light entering through the pupil center, as is the case in the Maxwellian-View. Such "edge" rays are known to produce changes in the color appearance of monochromatic lights (Type-II Stiles-Crawford Effect (SC-II) [46, 47]).
Figure 2 compares our data from the two viewing conditions. The abscissa shows, for Maxwellian-view, the wavelengths that elicited a series of hue sensations, ranging from 100%B to 20%Y & 80%R, in 5% hue steps. (The range of hue ratios is restricted to those that were seen by all participants.) The ordinate shows the change in wavelength needed to produce the same hue sensation in Newtonian-view as in Maxwellian-view. The solid, group-mean curve is re-drawn from our earlier paper comparing these viewing conditions; possible reasons for the effect are discussed fully in that paper. The other two curves in Figure 2, from the data in this paper (Figure 1c, d), break down the effect by sex: there appear to be some sex-related differences, but they are not significant (see statistical analysis below).
Shift in stimulus wavelength required to make the color appearance in Newtonian view the same as in Maxwellian view. Group mean functions disaggregated by sex. Error bars (SEMs) extend above the symbols for females and below the symbols for males.
Statistical analysis of sex differences
Although the effects of sex on color appearance seemed consistent between the two viewing conditions, they were small and not identical. To demonstrate that the sex effects are real, we used an accepted way to amalgamate data sets that have different means: computation of individual differences from the means for each condition.
Because there were almost twice as many female participants as males, averaging across all participants would grossly bias the result towards the wavelengths required by females. Therefore, for each viewing condition, a global mean of the wavelengths required to elicit each hue was derived: Optical-System Mean = (mean for males + mean for females)/2. Computing the means separately for each sex removes the effect of the difference in sample sizes between males and females.
Then, for each hue sensation in each of the two viewing conditions (Newtonian or Maxwellian), each participant's required wavelength was subtracted from the mean wavelength for that participant's sex. These differences from each Optical-System Mean were combined into one large matrix, organized to retain sex and optical-system descriptors.
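The combination procedure can be sketched as follows. The records and numbers below are entirely hypothetical, purely to illustrate the differencing: each participant's wavelength is subtracted from the mean for that participant's sex, and the differences are pooled while retaining sex and optical-system labels:

```python
from collections import defaultdict

# Hypothetical records: (sex, optical_system, hue, required wavelength in nm)
records = [
    ("F", "Newtonian", "100%B", 468.0),
    ("F", "Newtonian", "100%B", 470.0),
    ("M", "Newtonian", "100%B", 471.0),
    ("M", "Newtonian", "100%B", 473.0),
]

# Mean wavelength per (system, hue, sex), computed separately for each sex
# so that the larger female sample does not bias the combined mean
groups = defaultdict(list)
for sex, system, hue, wl in records:
    groups[(system, hue, sex)].append(wl)
means = {k: sum(v) / len(v) for k, v in groups.items()}

# Each participant's difference from their own sex's mean (mean minus value,
# as described in the text), pooled into one matrix with labels retained
diffs = [(sex, system, hue, means[(system, hue, sex)] - wl)
         for sex, system, hue, wl in records]
print([d[-1] for d in diffs])   # [1.0, -1.0, 1.0, -1.0]
```

Because each difference is taken against the participant's own sex mean, the pooled matrix is centered within each sex, so any residual male-female offset in the difference scores reflects a genuine sex effect rather than unequal group sizes.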
The male–female differences in the wavelength required for a specific hue are shown in Figure 3. This figure shows clearly the central point of this paper: males require a slightly longer wavelength than do females to experience the same hue.
Sex differences in shifts in wavelengths (combined data from Newtonian and Maxwellian views; see text for description of the combination procedure) associated with specific hue sensations. Wavelengths required for females to experience a specific hue ratio (blue, blue/green, green/yellow, yellow/red) were subtracted from the wavelengths required by males for the same hue ratios.
In Figure 3 the abscissa is a series of hue sensations, ranging from 100%B to 20%Y & 80%R, in 5% hue steps; the range of hue ratios is restricted to those that were seen by all participants. (For example, only 30 females reported a sensation of 5%R & 95%B, and only 24 had a sensation of 25%R & 75%B; of the males, only 18 reported a sensation of 5%R & 95%B, and only 14 had a sensation of 25%R & 75%B.) The results from the matrix combining all the data were averaged separately for males and females: the mean wavelength needed to elicit each hue for females was then subtracted from that for males. The results are plotted on the ordinate of Figure 3. For 56 out of 57 of these sensations, covering most of the visible spectrum, males require a longer wavelength than do females to experience a given hue sensation. This difference is also shown in Figure 1c, d: as noted above, for each viewing condition, the female data are rotated clockwise with respect to the male data.
An ANOVA (SPSS general linear model, repeated measures, mixed design) was run on the above global matrix; the factors were hue, sex, and optical system (see Table 1). While the sex effects were small, the effect of sex was significant: F(1, 92) = 7.004, p = 0.010. The degrees of freedom (92) were slightly less than expected from the total number of participants (105); this was due to some missing data points caused by minor errors in the computerized data acquisition. Data from participants with such missing data were excluded from the statistical analysis.
In Figure 3 the mean effect size of the male-female differences in wavelength required to elicit each hue is 2.2 nm. Although there are variations in the differences across hues, they are not significantly different from this mean (ANOVA: no significant effect of hue; see Table 1). That is, regardless of the particular hue, males required, on average, a wavelength 2.2 nm longer than that needed to elicit the same sensation from females. Similarly, there was no significant effect of optical system on the difference scores: the results were the same when each participant's data set was compared to the appropriate mean for a given optical system. Among other things, this means that the possible sex differences in the data from the two viewing conditions (see Figure 2) are not statistically significant. There were no other significant effects or interactions.
Rayleigh anomaloscope matches
Humans are polymorphic for the L- and M-cone photopigment genes. Furthermore, individuals may express more than one of these alleles, and there are sex differences in the relative numbers of L- and M-cones [14, 48, 49]. It is generally agreed that many females have phenotypes with multiple L- and M-photopigments, whereas there is some consensus that males may express only one of each. These findings may be the basis for the significant sex effects on hue that we report here.
Inter-individual variations in the spectral sensitivities of L- and M-cones should affect Rayleigh anomaloscope matches. In this test a bipartite field is illuminated on one side with a light that appears Y; the task is to match its appearance with an additive mixture, on the other side, of two lights, one appearing G and the other appearing R. (Because all the wavelengths we used were longer than 520 nm, S-cones contributed essentially nothing to the outcome.) Most participants who scaled the appearances of monochromatic lights seen in Newtonian-view also used our anomaloscope to make Rayleigh matches.
Following the convention of Neitz and Jacobs, we derived an average measure of the matching RG value as follows. We pooled the RG ratios for all participants, regardless of sex, and weighted the R and G values so that the group average of the quantity R/(R + G) equaled 0.5. This value is the midpoint of the abscissae in Figures 4a, b; these figures show the frequency distributions of this ratio for females and males, respectively. High values of the ratio imply less effective R in the matching mix, and low values imply less effective G. Individuals who lack the L-cone will have extremely high values of the ratio, while those who lack the M-cone will have extremely low values; these individuals exhibit forms of color blindness (dichromacy) referred to as protanopia and deuteranopia, respectively. Moderately high values indicate less severe (anomalous) forms of these deficiencies. All of our observers had ratios far from these extremes: they were color-normal.
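The re-weighting convention can be sketched numerically. The match settings below are hypothetical, and the weight is found by a simple bisection; the actual computation used in the paper may differ:

```python
def weighted_ratio(r, g, w):
    """Matching ratio R/(R+G) after multiplying the R value by weight w."""
    return w * r / (w * r + g)

def find_weight(matches, target=0.5, lo=1e-3, hi=1e3, iters=60):
    """Bisect (geometrically, since w > 0) for the R-weight that makes the
    group mean of wR/(wR+G) equal the target. The mean is monotonically
    increasing in w, so a single bracketing search suffices."""
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        mean = sum(weighted_ratio(r, g, mid) for r, g in matches) / len(matches)
        if mean < target:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

# Hypothetical anomaloscope settings (R, G primaries in instrument units)
matches = [(40.0, 60.0), (45.0, 55.0), (50.0, 50.0), (55.0, 45.0)]
w = find_weight(matches)
mean_ratio = sum(weighted_ratio(r, g, w) for r, g in matches) / len(matches)
print(round(mean_ratio, 3))   # 0.5
```

After this normalization, 0.5 marks the group-average match, and individual departures from 0.5 can be read directly as relatively more effective G (high values) or R (low values).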
Frequency distributions of anomaloscope matching ratios: R/(R + G), where R and G are re-weighted to produce a group mean of 0.5 (see text for details). (Only participants who also scaled hue and saturation of stimuli seen in Newtonian view.) (a) females. (b) males.
Because humans, particularly females, are polymorphic for the L- and M-cone genes, population Rayleigh matches might be expected to show multiple modes, as has been shown in published data based on large samples: males had a bi-modal distribution, while females had a largely tri-modal distribution. Even though our distributions of matching RG ratios (Figure 4) do not differ greatly by sex, they do show, especially for the females, some of the sex-related differences reported previously. Each graph also includes the normal distribution expected (based on the group mean and variance) if only random variations were involved. Applying the Kolmogorov-Smirnov test (as implemented in SPSS), the frequency distribution for males was not significantly different from normal (p = 0.134), perhaps due to the smaller male sample size; for females there was a significant difference from normality (p = 0.016).
We considered the distributions of Rayleigh matches because significant multiple modes point to possible sub-populations; each sub-population could be associated with modes in the spectral distributions of the unique hues. We examine this below.
Single cones can only report the rate at which their photopigments absorb photons: once a photon is absorbed, all information about its wavelength is lost. To provide information about wavelength (color), the nervous system must compare the responses of cones that contain different photopigments. This comparison is done by spectrally-opponent cells in the retina: for example, a cone type that is more sensitive to longer wavelengths might excite these cells, while another cone type, more sensitive to shorter wavelengths, would inhibit them.
Spectrally-opponent systems seem ubiquitous in species with color vision, from shallow-water mullets of the family Mugilidae, to eels, to macaque monkeys. In the macaque, four types of spectrally-opponent cells have been identified in the retina and in the visual area of the thalamus (lateral geniculate nucleus) [53–56]. These opponent cells have spectral points at which excitation and inhibition are equal and there is no net response (null points). Psychological sensations of color also have spectral nulls: for example, the sensation that is only Y (unique Y) coincides with the null point for R vs. G (see Figure 1). However, the psychophysical null points (unique hues) do not coincide with the nulls of the spectrally-opponent cells. Sensations must ultimately depend on re-processing of these neural inputs to form the opponent hue (sensory) mechanisms.
The spectral loci of the unique hues are especially interesting because they define the null points of spectrally-opponent sensations, i.e., hue mechanisms. We argue that hue mechanisms are opponent based on a variety of evidence, including observations that one half of an opponent system can be used to cancel the sensation of its opponent [57, 58]. Thus, unique Y occurs at the wavelength that elicits a sensation of neither R nor G; this is the null point of the RG mechanism. The precise values of these loci therefore play an important role in constraining many models of color vision based on spectrally-opponent processing of visual information (e.g., [30, 57, 59–61]).
UADs were plotted for each individual, and the wavelength for each unique hue was found by interpolation on the fitted spline. Figures 5a–c show the frequency distributions of the spectral loci of unique Y, G, and B. For simplicity, these figures show only data for the Newtonian-view; the results and conclusions for the Maxwellian-view are very similar (e.g., see Table 2). For comparability across the three graphs, bin widths were set at 0.33 of the standard deviation of each distribution. (Some of the data points in these figures were included in our earlier paper, but here we have added a substantial number of new participants.) Note that for most individuals there is no spectral wavelength that corresponds to unique R: the longest wavelengths elicit a sensation that contains some Y. For each hue, we also show the expected distribution if the loci were normally distributed. By Kolmogorov-Smirnov tests (as implemented in SPSS), all the group distributions differ significantly from their expected normal distributions: for Y, p = 0.0043; for G, p = 0.0004; for B, p = 0.0003. The significant deviations from normality and the existence of sub-peaks suggest that, for the unique hues, humans are not a homogeneous population. In particular, the distribution for Y is very narrow, a finding that has also been reported by others using comparable sample sizes but very different psychophysical techniques.
Frequency distributions of spectral loci of unique hues (Newtonian view), together with normal distributions based on means and standard deviations of the data. Bin widths = 0.33 of SD for given distribution. (a) Yellow. (b) Green. (c) Blue.
The multiple peaks seen in the distributions in Figure 5 may be sex-related; Figures 6a–c show the same data, split between males and females to examine this. Applying the Kolmogorov-Smirnov test, the frequency distributions of females show significant deviations from normality: Y, p = 0.01; G, p = 0.0005; B, p = 0.0002. However, none of the male distributions differs significantly from the expected normal distribution (possibly due to small sample size): Y, p = 0.152; G, p = 0.2; B, p = 0.119. In all these cases, however, males have their loci shifted towards longer wavelengths, reiterating the general finding that males require a longer wavelength than females to experience the same hue sensation (Figure 3).
Frequency distributions of spectral loci of unique hues (Newtonian view), disaggregated by sex. Bin widths = 0.33 of SD for given distribution. (a) Yellow. (b) Green. (c) Blue.
The similarity of the findings for the two viewing conditions is shown in Table 2, which gives the group mean wavelengths of the spectral loci of the unique hues for the Newtonian and Maxwellian views. Not only are the values similar, but in all cases the values for males are shifted to longer wavelengths.
To examine whether there were any correlations between individuals’ spectral loci and their associated anomaloscope ratios, we computed R² values separately for males and females. None of the correlations, for any of the unique hues, was significant; most were essentially flat lines, with R² ranging from 0.001 (males, B-unique) to 0.17 (males, Y-unique). The lack of any clear correlations is interesting. Given the relatively broad range of the anomaloscope ratios, and the indications of sub-peaks, possibly related to expression of different L- and M-cone alleles, we might have expected a closer relation between a participant’s anomaloscope ratio and his or her locus of a spectral hue. Unique Y in particular is a function only of L- and M-cone inputs to the G-R opponent hue mechanism; it coincides with the spectral null point of the G-R system. But these cone inputs must be weighted relative to each other: specifically, the input from the M-cones must be weighted more strongly than that from the L-cones in order to shift unique Y to its observed spectral locus.
Furthermore, the relative weights of these inputs must be quite tightly constrained, because the distribution of unique Y shown in Figure 5 is narrow (see for a more complete discussion). This narrowness is remarkable for two reasons: firstly, there are differences among individuals in L- and M-cone spectral sensitivities, as shown by the range of anomaloscope ratios; secondly, there are large variations among individuals in the relative numbers of these cones. The cortical weighting of the cone inputs to the G-R system must compensate for these individual differences.
Finally, we looked for any possible correlations between the spectral loci of each pair of unique hues. We found none, confirming similar earlier conclusions [61, 63].
Because an individual’s UAD for a particular viewing condition has a uniform metric, it can be used to derive a wavelength-discrimination function [23–27]. Participants’ functions were derived by measuring, for each stimulus (along the spline function fitted to each individual's UAD – see above), the change in wavelength needed to produce a fixed, criterion change in sensation; these wavelength shifts were averaged across participants to obtain group wavelength-discrimination functions for males and females.
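The derivation just described can be sketched as follows, assuming the cumulative perceptual distance along the spline, s(λ), has already been computed; the perceptual-distance values below are invented for illustration.

```python
import numpy as np

# Toy perceptual-distance rate ds/dlam: discrimination is best (rate
# highest) near 490 and 590 nm, roughly as in classical data.
lam = np.arange(420.0, 681.0, 1.0)                 # wavelength, nm
rate = (1.0
        + 4.0 * np.exp(-((lam - 490.0) / 25.0) ** 2)
        + 4.0 * np.exp(-((lam - 590.0) / 25.0) ** 2))
s = np.cumsum(rate)          # cumulative sensation distance (monotone)

criterion = 5.0              # fixed criterion change in sensation

# For each wavelength, the shift producing the criterion change is found
# by inverse interpolation of s(lam); NaN where the spectrum runs out.
delta_lam = np.interp(s + criterion, s, lam, right=np.nan) - lam
```

Smaller `delta_lam` means better discrimination; averaging such curves across observers would give group functions like those in Figure 7.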
Figure 7 shows wavelength-discrimination functions for the two optical viewing conditions, broken down by sex. Figure 7a shows the curves for the Newtonian view, and Figure 7b the same for the Maxwellian view. The general trends are remarkably similar. While there are no statistically significant sex differences, the male and female curves are not identical. Applying Exploratory Data Analysis to these data, there appear to be systematic differences between the sexes: in the middle of the spectrum, males have a slightly broader range of relatively poor discrimination (540–560 nm for the Newtonian view; 530–570 nm for the Maxwellian view). We suggest that the sex differences in wavelength discrimination are real.
Group means of wavelength-discrimination functions derived from individual Uniform Appearance Diagrams disaggregated by sex. (a) Newtonian view. (b) Maxwellian view. Error bars: SEMs.
Materials and methods
Key resources table (flattened in the source; recoverable entries): antibody – anti-goat IgG (H + L) HiLyte (AnaSpec; used at 1:100); biological samples – Macaca fascicularis and Macaca mulatta of both sexes, aged 2 through 20 years (Washington Regional Primate Research Center).
Tissue, cells, and solutions
Electrophysiological recordings were performed on primate retina obtained through the Tissue Distribution Program of the University of Washington’s Regional Primate Research Center. Recordings were made from retinas of Macaca fascicularis, Macaca nemestrina, and Macaca mulatta of both sexes, aged 2 through 20 years. All use of primate tissue was in accordance with the University of Washington Institutional Animal Care and Use Committee. Tissue was obtained and prepared as described previously (Angueyra and Rieke, 2013; Sinha et al., 2017). In short, retina was dark adapted (>1 hr) and stored in warm (32°C), oxygenated Ames medium; this time was sufficient to fully dark-adapt the retina, as judged by responses to single photons in rods and downstream cells (Ahnelt et al., 1987; Ala-Laurila and Rieke, 2014). Following dark adaptation, a piece of retina roughly 2–3 mm on a side was separated from the pigment epithelium and placed photoreceptor side up (cone recordings) or down (retinal ganglion-cell recordings) on a poly-lysine-coated coverslip (BD Biosciences, San Jose, CA) that served as the floor of our recording chamber. For cone recordings, retina was treated with DNase1 (Sigma-Aldrich, St. Louis, MO) (30 units in 3 min) prior to placing it in the recording chamber. Throughout recordings, the tissue was continuously perfused with warm, oxygenated Ames solution.
Cone patch-clamp recordings were performed in intact pieces of flat-mounted retina as described previously (Angueyra and Rieke, 2013; Sinha et al., 2017). In short, we measured cone light responses using a combination of whole-cell voltage-clamp (holding potential = −60 mV, not corrected for liquid junction potential) and current-clamp (holding current = 0 pA) recordings. Extracellular recordings from retinal ganglion cells were performed as described previously (Sinha et al., 2017). Data were low pass-filtered at 3 kHz, digitized at 10 kHz, and acquired using a Multiclamp 700B amplifier. No additional filtering was applied to any of the data presented except in Figure 4A. All recordings were controlled using Symphony Data Acquisition Software, an open-source, MATLAB-based electrophysiology software (https://github.com/symphony-das).
S cone identification
S cones make up a minority of the cone photoreceptors within the primate retina. While recording, the photoreceptor array was visualized using DIC microscopy, under which all cones appear similar. Initially, we attempted to label S cones in in vitro retina using an antibody directed against the S-opsin molecule (anti-OPN1SW, sc-14363, Santa Cruz Biotechnology, Dallas, TX). Although we did not use this approach to collect any of the data reported here, it did help us learn to identify S cones based on their morphology and position within the photoreceptor array (Ahnelt et al., 1987; Packer et al., 2010). Targeting cones that appeared slightly smaller, recessed, and out of place within the photoreceptor array dramatically increased the probability that the cones were S cones. Targeting cones in this manner allowed us to efficiently collect a large number of S-cone recordings.
We selected cells for data collection based on the amplitude of their response to a bright flash. For current-clamp recordings, data were collected only from cones with a maximal response exceeding 10 mV. For voltage-clamp recordings, this criterion was 100 pA (at a holding voltage of −60 mV). The assumption underlying these criteria is that the cells with the largest responses most closely resemble cells in vivo. We controlled for three additional factors that could potentially bias our results. First, cone responses slow over time during whole-cell recordings, likely due to washout of intracellular components essential for phototransduction. To minimize this source of bias, we collected data for no longer than 4 min after initiating a recording; time to peak and response amplitude changed minimally during this time, and the response waveform did not noticeably change. Second, the data reported here were collected between 1 and 15 hr after collecting the retina, and cone responses could change over this time. To check this possibility, we grouped the cells into periods of 1–6, 6–10, and 10–15 hr post-retina collection. The time to peak of S-cone responses was significantly different from that of LM cones in each time window (data not shown). Third, responses can differ from one retina to the next. To control for such retina-to-retina differences, we referenced each recorded S cone to L and/or M cones recorded in the same retina. The differences we observe in responses collected across all recorded cones (Figure 1) were also present in responses of cells within a given piece of retina (Figure 2E and F and Figure 4C). The data reported here represent recordings from a total of 72 S cones, 60 M cones, and 112 L cones from peripheral retina and 32 S cones, 25 M cones, and 28 L cones from foveal retina. For each dataset except Figure 6, we used at least five retinas from five animals. In Figure 6, we used two retinas from two animals.
Stimuli were presented from computer-driven LEDs with peak wavelengths of 406, 515, and 640 nm to provide the flexibility to effectively stimulate all three cone types. Light stimuli covered a 500 μm disk centered on the targeted cell. Following a successful recording from peripheral retina, we moved to a location on the retina outside the region exposed to light before attempting another recording; this ensured that all recorded cells were fully dark adapted at the start of the recording. In the fovea, this was limited to 2–3 locations given its small size. The minimum light level used (500 R*/cone/s) was sufficient to effectively eliminate rod responses (Grimes et al., 2018).
All stimulus protocols were generated using custom-written MATLAB-based extensions of Symphony Data Acquisition Software, and delivered at 10 kHz. To calculate cone isomerization rates, we used measured LED spectra and power, primate photoreceptor spectra from Baylor et al. (1987), and an effective collecting area of 0.37 μm² (Schnapf et al., 1990). For reference, one photopic troland is 10–30 R*/cone/s (Spillmann and Werner, 1990; Crook et al., 2009). Based on morphological differences, S cones could have a smaller collecting area than LM cones (Ahnelt et al., 1987; Packer et al., 2010); however, such differences are unlikely to explain our results, as the responses of S cones at backgrounds of 50,000 R*/cone/s are slower than those of LM cones at 5000 R*/cone/s. Furthermore, cone collecting areas estimated for light delivered from above and below the retina (i.e. directly to the outer segment vs first traversing the inner segment) were similar; thus, focusing of light via the inner segment contributes minimally in our preparations, and hence differences in inner segment size between S and LM cones are unlikely to affect the collecting area.
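The conversion from LED output to isomerization rate can be sketched as below. Every number here (LED power, spectra, spot size) is an invented placeholder; only the 0.37 μm² collecting area comes from the text.

```python
import numpy as np

h, c = 6.626e-34, 3.0e8                       # Planck constant, speed of light
dlam = 1e-9
wavelengths = np.arange(380e-9, 781e-9, dlam)  # visible range, m

def norm_spectrum(center, width):
    # Gaussian spectrum normalized to unit area (units of 1/m).
    s = np.exp(-((wavelengths - center) / width) ** 2)
    return s / (s.sum() * dlam)

led_power = 1e-9                              # W reaching the stimulus spot
spot_area = np.pi * (250e-6) ** 2             # m^2 (500 um diameter disk)
led_spec = norm_spectrum(515e-9, 15e-9)       # toy LED emission spectrum
cone_sens = np.exp(-((wavelengths - 530e-9) / 40e-9) ** 2)  # toy, peak = 1

collecting_area = 0.37e-12                    # m^2, i.e. 0.37 um^2

# Photon flux per unit area and wavelength: P(lam) * lam / (h c) / area.
photon_flux = led_power * led_spec * wavelengths / (h * c) / spot_area
rstar = collecting_area * np.sum(photon_flux * cone_sens) * dlam  # R*/cone/s
```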
Cone-isolating stimuli were constructed using a matrix that mapped from LED input to our calculated isomerizations in each cone type. The inverse of this matrix maps from isomerizations in each cone type to an input to each LED. Using this matrix, we were able to specify our stimuli in terms of isomerizations to each cone type. Any failures in isolating S cones versus LM cones would decrease the magnitude of the kinetic differences we saw in the analysis presented in Figure 6.
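A minimal sketch of this construction, with an invented 3×3 isomerization matrix:

```python
import numpy as np

# Invented mapping from LED drive to isomerization rates (R*/s) in each
# cone type; rows are (L, M, S), columns are the 640, 515, 406 nm LEDs.
A = np.array([[1200.0,  800.0,   30.0],
              [ 900.0, 1000.0,   50.0],
              [  40.0,  200.0, 1500.0]])

A_inv = np.linalg.inv(A)

# Desired modulation: isomerizations in S cones only (L and M silenced).
target = np.array([0.0, 0.0, 1000.0])      # (dL*, dM*, dS*)
led_drive = A_inv @ target                 # per-LED modulation

achieved = A @ led_drive                   # sanity check of the isolation
```

In practice the matrix would be computed from measured LED spectra and cone spectral sensitivities; its inverse then gives the LED modulations that silence two cone types while modulating the third.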
To compute the linear filters in Figures 4 and 6, we presented time-varying Gaussian-noise stimuli with 50% contrast (SD/mean) and 0–60 Hz bandwidth. This stimulus was presented at a mean of 2500 R*/cone/s for the experiments of Figure 4 and either 1000 or 10,000 R*/cone/s for the experiments of Figure 6.
Noise recordings in Figure 7 were based on 3 s light steps from darkness to different background-light levels. In a subset of cells, the step duration was increased and dim flashes were superimposed upon the step to provide the data necessary to compute detection thresholds.
All data were analyzed using custom-written MATLAB analysis routines.
All time-to-peak analyses were repeated using a series of different techniques to calculate the times to peak of cone flash responses and small bistratified ganglion-cell linear filters. Results remained significant regardless of the technique used. For the first approach, we took the time at which the raw average response or linear filter reached its maximal value. Due to unavoidable noise, in some recordings a random noise fluctuation visibly affected the time-to-peak determination. To control for this, we used two fitting-based approaches. For the first, we fit a truncated Gaussian distribution spanning 20 ms to the peak region of the average flash response or linear filter; the time to peak was taken to be the time at which the Gaussian fit reached its maximal value. The final approach involved fitting a function previously shown (Angueyra and Rieke, 2013) to capture the structure of the flash response (Equation 1). We found this function to have the representational power necessary to fit both cone flash responses and SBC linear filters, and we defined the time to peak as the time at which this fit reached its maximal value. For cones, all times to peak reported here were calculated using the truncated Gaussian fit; small bistratified cell linear-filter times to peak were calculated using fits of Equation 1.
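The truncated-Gaussian approach can be sketched on a synthetic flash response; the 10 kHz sampling matches the Methods, while the peak time and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic flash response sampled at 10 kHz (all parameters invented).
dt = 1e-4
t = np.arange(0.0, 0.2, dt)
rng = np.random.default_rng(1)
true_peak = 0.055
response = (np.exp(-((t - true_peak) / 0.015) ** 2)
            + rng.normal(0, 0.05, t.size))

def gauss(t, a, mu, sigma, b):
    return a * np.exp(-((t - mu) / sigma) ** 2) + b

# Fit a Gaussian to a ~20 ms window around the raw (noise-sensitive)
# peak, and take the fitted peak location as the time to peak.
i_raw = int(np.argmax(response))
win = (t > t[i_raw] - 0.010) & (t < t[i_raw] + 0.010)
params, _ = curve_fit(gauss, t[win], response[win],
                      p0=[response[i_raw], t[i_raw], 0.005, 0.0])
time_to_peak = params[1]
```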
Noise and flash-response power spectra were calculated using MATLAB’s built-in fast Fourier transform and converted to two-sided power spectral densities with units of pA²/Hz. Dim-flash response recordings contain a combination of cellular noise, instrumental noise, and the flash response. To isolate the flash-response power, power-spectral densities were computed using fits to the dim-flash response (using Equation 1). To compute the power in different frequency ranges, the power spectral densities were integrated across the range.
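A minimal sketch of this computation, using a synthetic current trace in place of real recordings:

```python
import numpy as np

fs = 10_000.0                         # sampling rate, Hz (as in the Methods)
n = 2 ** 14
t = np.arange(n) / fs
rng = np.random.default_rng(2)

# Synthetic "current" trace: white noise plus a 20 Hz component (pA).
current = rng.normal(0, 1.0, n) + 2.0 * np.sin(2 * np.pi * 20.0 * t)

freqs = np.fft.fftfreq(n, d=1 / fs)
psd = np.abs(np.fft.fft(current)) ** 2 / (fs * n)   # two-sided PSD, pA^2/Hz

# Power in the 0-60 Hz band: integrate the PSD (df = fs/n), counting
# both positive and negative frequencies of the two-sided spectrum.
band = np.abs(freqs) <= 60.0
band_power = np.sum(psd[band]) * (fs / n)           # pA^2
```

By Parseval's theorem, summing the whole PSD times df recovers the trace's mean-square amplitude, which is a convenient check on the normalization.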
Cellular-noise isolation was performed as in Angueyra and Rieke (2013). Any current fluctuation in a voltage-clamp recording is a combination of noise arising in phototransduction in the cones (cellular noise) and noise from the recording itself (instrumental noise). Providing a near-saturating light stimulus shuts down phototransduction and isolates instrumental noise. Under the assumption that cellular and instrumental noise are independent, cellular noise can be isolated by subtracting the power spectrum of the noise in saturating light from the noise power spectrum at each background.
Temporal frequency-tuning curves
Frequency-tuning curves were constructed using a cone’s responses to sinusoidal stimuli across a range of frequencies. To quantify a cone’s response amplitude at a given frequency, the best fit was found using the following equation:
f was matched to the frequency of the stimulus; a, b, and c were free to vary. The response amplitude was taken to be the fitted value of a divided by the contrast of the stimulus. This contrast-normalization step was necessary because higher contrasts were required to elicit responses at higher frequencies, where the cells were less responsive. Before averaging tuning curves across cells, the tuning curve of each cell was normalized such that its amplitude at the frequency with the strongest response was 1.
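The fitted equation itself did not survive extraction; a sinusoid with amplitude a, phase b, and offset c is the form consistent with the described parameters, and is assumed in this sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

F_STIM = 8.0                                   # stimulus frequency, Hz (example)

def response_model(t, a, b, c):
    # Assumed form: sinusoid at the stimulus frequency with amplitude a,
    # phase b, and offset c (the original equation is not reproduced here).
    return a * np.sin(2 * np.pi * F_STIM * t + b) + c

# Synthetic "cone response" to a 50%-contrast sinusoid (values invented).
t = np.arange(0.0, 1.0, 1e-4)
rng = np.random.default_rng(3)
resp = response_model(t, 5.0, 0.4, -1.0) + rng.normal(0, 0.5, t.size)

contrast = 0.5
p0 = [np.sqrt(2) * resp.std(), 0.0, resp.mean()]
params, _ = curve_fit(response_model, t, resp, p0=p0)
amplitude = abs(params[0]) / contrast          # contrast-normalized amplitude
```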
The frequency at which a cone’s response decreased by a factor of 10 was calculated by interpolating a smooth function fit to its frequency-response curve. Under an assumption of linearity, the shape of the frequency-tuning curve is equivalent to the power spectrum of the cone flash response. Therefore, the best fit was found for each curve using the power spectrum of Equation 1. Best fits were found using the following loss function:
where F(ω_i, θ) is the prediction from a fit with parameters θ at frequency ω_i, and D_{ω_i} is the data.
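The loss function itself is missing from this extraction. Given the definitions above, a plain least-squares form consistent with them would be the following (an assumption; the original may have weighted or log-transformed the terms):

```latex
L(\theta) = \sum_i \left[ F(\omega_i, \theta) - D_{\omega_i} \right]^2
```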
Adaptation curves
For each cell, average dim-flash responses across a range of background-light levels were fit with Equation 1. The amplitude of such a response was taken to be the amplitude of the fit function and converted to a response per isomerization by dividing by the flash strength. A cell’s response amplitudes per isomerization across background-light levels were fit with a Weber curve:
The half-maximum amplitude of the adaptation curve was taken to be the value of I 0 from the fit. Fits were performed using the loss function from Equation 3. Prior to averaging adaptation curves across cells, each was scaled such that its best fit to Equation 4 would have a response per isomerization of 1 on a background of 0 R*/s.
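Equation 4 is not reproduced in this extraction; the standard Weber form consistent with the text (half-maximal amplitude at I = I0) is A(I) = A0 / (1 + I/I0), which this sketch assumes:

```python
import numpy as np
from scipy.optimize import curve_fit

def weber(I, A0, I0):
    # Assumed Weber form: half-maximal amplitude at I = I0.
    return A0 / (1.0 + I / I0)

# Hypothetical response-per-isomerization amplitudes across backgrounds.
backgrounds = np.array([0.0, 100.0, 500.0, 2000.0, 10000.0, 50000.0])  # R*/s
amps = weber(backgrounds, 1.0, 3000.0)       # noiseless toy data

(A0_fit, I0_fit), _ = curve_fit(weber, backgrounds, amps, p0=[1.0, 1000.0])
```

Scaling each cell's curve so that its fitted A0 equals 1 implements the normalization described above.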
Cone linear-filter calculation
Linear filters were computed using cone responses to Gaussian-noise stimuli as described previously (Wiener, 1949; Rieke et al., 1997; Chichilnisky, 2001).
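A sketch of the reverse-correlation idea behind these methods: for a white Gaussian-noise stimulus, the linear filter is proportional to the stimulus-response cross-correlation. The "cone" here is a toy exponential filter, so the estimate can be checked against ground truth.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n = 10_000, 2 ** 15
stim = rng.normal(0, 1.0, n)                   # Gaussian-noise stimulus

taps = 500
true_filter = np.exp(-np.arange(taps) / 50.0)  # toy filter, tau = 5 ms
resp = np.convolve(stim, true_filter)[:n]      # noiseless response ...
resp += rng.normal(0, 0.5, n)                  # ... plus recording noise

# Cross-correlate stimulus and response at each lag, normalized by the
# stimulus power; for white input this estimates the filter directly.
m = n - taps
est = np.array([stim[:m] @ resp[k:m + k] for k in range(taps)])
est /= stim[:m] @ stim[:m]
```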
LN-models were constructed from small bistratified ganglion-cell responses to Gaussian-noise stimuli through a series of steps. First, spike detection was performed. Then, the optimal linear filter mapping from the stimulus to binary vectors of spike responses was computed (Chichilnisky, 2001). Finally, a nonlinearity was calculated to map the output of a stimulus convolved with this linear filter (generator signal) to a probability of spiking. This was constructed by convolving each white-noise stimulus vector with the calculated linear filter and, based on the detected spikes, determining the probability of a spike given some generator signal. Nonlinearities were fit with Gaussian cumulative-distribution functions.
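The nonlinearity step can be sketched as follows, assuming the linear filter is already known (here it is invented, and "spikes" are simulated from a known Gaussian-CDF nonlinearity so the fit can be checked):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 200_000
stim = rng.normal(0, 1.0, n)

# Assumed-known linear filter (invented); its output is the generator signal.
filt = np.exp(-np.arange(50) / 10.0)
gen = np.convolve(stim, filt)[:n]

# Simulate binary spikes from a known Gaussian-CDF nonlinearity.
p_true = norm.cdf(gen, loc=2.0, scale=1.5)     # P(spike) per time bin
spikes = rng.random(n) < p_true

# Bin the generator signal (quantile bins) and compute spike probability.
edges = np.quantile(gen, np.linspace(0.0, 1.0, 21))
idx = np.clip(np.searchsorted(edges, gen) - 1, 0, 19)
g_mean = np.array([gen[idx == i].mean() for i in range(20)])
p_spk = np.array([spikes[idx == i].mean() for i in range(20)])

# Fit the measured nonlinearity with a scaled Gaussian CDF.
model = lambda g, a, mu, s: a * norm.cdf(g, mu, s)
params, _ = curve_fit(model, g_mean, p_spk, p0=[1.0, 0.0, 1.0],
                      bounds=([0.0, -10.0, 0.1], [2.0, 10.0, 10.0]))
```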
Organization of neurons into networks is a defining feature of a nervous system. Networks are essential for most complex computations and all conversions of sensory input to functional output. This network organization is accomplished by synapses, which provide the modes of communication between neurons. In all nervous systems, changes in synaptic strength are a fundamental tool to modify the network for a specific task, to emphasize a specific input or output, and to learn.
Two structurally and functionally different types of synapses, chemical and electrical, carry the burden of communication between neurons. Chemical synapses, with separate complex presynaptic and postsynaptic elements, have long been understood to be plastic, undergoing changes that strengthen or weaken the synapse under certain conditions. Gap junction-mediated electrical synapses are structurally simpler, giving rise to the misconception that they are also functionally simple. However, electrical synapses have been found to have great latitude for plasticity, contributing in many ways to the modification of network computations essential to optimize nervous system function. This review will briefly introduce electrical synapses and summarize the plastic mechanisms used to control neuronal coupling in order to optimize network functions.
Properties of electrical synaptic transmission
Gap junctions are composed of aggregates of intercellular channels that connect the cytoplasm of two cells, constituting a pathway for the diffusion of small intracellular solutes between cells [1, 2]. Besides this chemical coupling, gap junctions support electrical coupling based on their ability to allow the movement of ions, thus representing a low-resistance pathway for the direct flow of electrical current between cells (Fig. 1a, b). Because gap junction communication occurs without any intermediary messenger, unlike transmission at chemical synapses, it provides a fast mechanism for intercellular synaptic transmission.
Basic properties of electrical coupling. a Schematic drawing of the experimental design for studying electrophysiological properties of electrical synapses, showing simultaneous intracellular recordings using the dual whole-cell patch-clamp technique applied to a pair of coupled cells. b When a hyperpolarizing current pulse is injected into cell 1 (I Cell 1), a voltage deflection is produced in that cell (V1) and also in cell 2 (V2), although the voltage change in the latter is of smaller amplitude. Traces are representative drawings. c An action potential in one cell (cell 1) of an electrically coupled pair produces a coupling potential or spikelet in the other cell (cell 2), which presents a much slower time course compared to the presynaptic spike. d Left, drawing shows the equivalent circuit for a pair of coupled cells during current injection into cell 1 (oblique arrow, I), where R1 and R2 represent the membrane resistances of cell 1 and cell 2, respectively, and Rj represents the junctional resistance. For a voltage change at steady state (red portion of traces in b) the membrane capacitance is fully charged and current is only resistive. Smaller arrows indicate the direction of current flow in the circuit. Right, circuit representing the voltage divider constituted by the junctional resistance (Rj) connected in series to the membrane resistance of the postsynaptic cell (R2). Input voltage is the membrane voltage change in the presynaptic cell (cell 1, V1), whereas the output voltage of the divider is the membrane voltage change in the postsynaptic cell (cell 2, V2)
Beyond the first description in the motor giant synapse of the crayfish [3, 4], electrical transmission has been established in the nervous systems of many phyla, from primitive animals like jellyfish to more evolved ones like mammals. In the mammalian brain, electrotonic coupling between neurons has been identified in almost every structure, including the neocortex, hippocampus, inferior olivary nucleus, cerebellar cortex, trigeminal mesencephalic nucleus, vestibular nucleus, hypothalamus, spinal cord, and retina, among others (for review see ).
In many cases these junctions behave as simple ohmic resistors, through which current flow is determined by the difference in membrane voltage of the coupled cells (the transjunctional voltage) and the resistance of the junction. As such they support bi-directional communication and tend to equalize the membrane potentials of coupled cells. This means that activation of either cell of a coupled pair will produce a comparable attenuated potential (the coupling potential or spikelet) in the other cell (Fig. 1c). These characteristics of gap junction-mediated transmission determine two distinctive physiological properties of electrical synapses: high speed and sign conservation. Both of these characteristics may promote the synchronous activation of neuronal ensembles. However, beyond these two well-established and classical roles, electrical coupling in conjunction with properties of the non-junctional membrane of neurons provides mechanisms for more complex operations like inhibition, amplification, and frequency-selective transmission.
Determinants of the strength of electrical synapses
In most cases, electrical synapses can be considered to function as a simple resistance between two coupled neurons. Consequently, the degree to which a neuron is coupled to another can be described by the electrical influence a voltage change in one neuron has on its coupled neighbor, i.e. the coupling coefficient (C):
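The defining equation did not survive extraction; the standard definition, consistent with the text that follows, is:

```latex
C = \frac{V_2}{V_1}
```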
where V1 is the voltage of the “driver” cell and V2 is the voltage of the “follower” cell. From this relationship it is evident that coupling potentials have the same sign as presynaptic signals but are smaller in amplitude (Fig. 1b). In the absence of voltage-dependent mechanisms in the postsynaptic cell this coefficient varies between 0 and 1, and the larger its value, the more strongly the two cells are electrically coupled.
For a voltage change at steady state, the simplest electrical representation of two cells connected by a gap junction is the circuit depicted in the left panel of Fig. 1d, where Rj represents the junctional resistance, and R1 and R2 the membrane resistances of the coupled cells. Current injected into cell 1 has two parallel pathways through which to flow, one through R1 and the other through Rj and R2, thus producing a voltage change both in the presynaptic cell (V1) and in the postsynaptic cell (V2). Moreover, because Rj and R2 are connected in series, they constitute a voltage divider or attenuator, that is, a simple circuit in which the input voltage is split between the two components in proportion to their resistances, with V1 as the input voltage and V2 as the output (Fig. 1d, right panel). In a voltage divider the output voltage depends on the input voltage according to the following equation:
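The voltage-divider equation is likewise missing from this extraction; for the series circuit described, it reads:

```latex
V_2 = V_1 \,\frac{R_2}{R_j + R_2}
\qquad\Rightarrow\qquad
C = \frac{V_2}{V_1} = \frac{R_2}{R_j + R_2}
```

This is why the coupling coefficient depends on the ratio of the junctional and postsynaptic membrane resistances rather than on their absolute values.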
From the above analysis it can be concluded that the coupling coefficient depends both on the junctional resistance and on the membrane (non-junctional) resistance of the postsynaptic cell. However, the strength of electrical transmission does not depend on the absolute value of either of these resistances but instead on the relationship between them (see below). While the junctional resistance depends on the properties of intercellular gap junction channels, the membrane resistance depends on the number of non-junctional membrane channels open at resting potential and is a major determinant of the input resistance of neurons, and hence of the way they respond to synaptic inputs.
Plasticity of electrical synapses
Given that the strength of coupling between neurons depends on dynamic factors such as the resistance of the gap junction and the membrane resistance of the postsynaptic cell, it should be clear that coupling also changes dynamically. Indeed, all aspects that control electrical synaptic strength can change over a wide variety of time scales ranging from milliseconds to days, with different mechanisms participating at different time scales. These mechanisms will be treated separately below.
Changes in conductance of electrical synapses
Voltage gating of connexin channels
Like many other membrane ion channels, gap junction channels display some degree of voltage sensitivity [1, 9]. Voltage gating of connexin channels results in shifts to a low-conductance state or subconductance state at the level of the individual channel. Dynamic voltage gating has been observed to occur during cardiac myocyte action potentials and contributes to the waveform and propagation of the action potential through the syncytium. This gating behavior was attributed largely to Cx43 channels, which are the dominant connexin in cardiac myocytes.
In contrast to cardiac gap junctions, gap junction channels formed by Cx36, the main synaptic connexin of the mammalian brain, display weak voltage dependence. In fact, junctional conductance is nearly insensitive to transjunctional voltage up to ±30 mV and declines gradually to 60% over a 90 mV range. Moreover, the time course of the underlying gating process requires hundreds of milliseconds to seconds to reach steady state [11–13]. While gating processes of gap junction channels are able to produce a substantial modification of the junctional conductance, these changes occur on time scales several orders of magnitude longer than those of single spikes and synaptic potentials, the main sources of coupling potentials under physiological conditions. Thus, electrical synapses composed of Cx36 are unlikely to be susceptible to voltage gating during normal neuronal activity.
Other connexins that form electrical synapses in vertebrate nervous systems exhibit more robust voltage gating. Cx45, which is present in a small number of electrical synapses, is particularly sensitive to transjunctional voltage [14, 15], with half-maximal reduction of the voltage-sensitive conductance at 13.4 mV in the steady state. While voltage gating of connexin channels is driven largely by the “fast gate”, the kinetics of this mechanism are nonetheless somewhat slow and unlikely to have a large impact on channel conductance during a neuronal action potential. However, gating is likely to occur in neurons that use sustained, graded voltage signaling, such as retinal bipolar cells, some of which do use Cx45 in electrical synapses [16–18]. The impact of any such changes on electrical signaling is unknown.
Phosphorylation and dephosphorylation of channels
Very significant changes in the overall conductance of gap junction channels that form electrical synapses occur through signaling pathways that result in phosphorylation or dephosphorylation of connexins. Studies of retinal horizontal cells have shown that catecholamines, dopamine in particular, reduce the receptive field size and tracer coupling [19–22]. These effects were shown to result from activation of a D1 dopamine receptor that elevated intracellular cAMP via adenylyl cyclase activity [23–25], and depended on activation of protein kinase A . The reduced electrical coupling in fish horizontal cells resulted from a reduction in the open probability of the gap junction channels without a change in unitary conductance . The horizontal cells in fish contain several connexins: Cx55.5, Cx52.6, and Cx52.9 have all been identified in zebrafish [28–30]. It is not clear which, if any, of these contribute to the plasticity that has been observed in horizontal cells from the fish species studied physiologically.
The vast majority of electrical synapses in the mammalian central nervous system utilize Cx36 (homologous to Cx35 in non-mammalian vertebrates). A number of in vitro studies have shown that electrical or tracer coupling via this connexin is regulated by phosphorylation driven by cAMP/PKA [31, 32], nitric oxide/PKG, and Ca²⁺/CaMKII signaling pathways [34, 35], with a few conserved phosphorylation sites being key regulators of coupling. The biophysical basis of changes in macroscopic coupling has not been elucidated, but changes in channel open probability, based upon changes in mean open time, have been suggested as the mechanism of plasticity.
A number of studies have revealed that Cx36 phosphorylation state changes with conditions that change coupling and is an accurate, and essentially linear, predictor of coupling as assessed by tracer transfer [36–39]. In retinal neurons, phosphorylation-dependent changes in coupling are driven by light adaptation [38–40] and/or circadian rhythms [41–43]. The signaling pathways that control these changes have been studied in detail in photoreceptor and AII amacrine cells in recent years, revealing a common theme of regulation by well-defined opposing signaling pathways.
A role for dopamine D2-like receptors in controlling rod to cone photoreceptor coupling has been known for some time [44, 45]. In rodents, this is actually a D4 receptor [39, 46], which inhibits adenylyl cyclase via Gi and reduces cAMP level. Phosphorylation of Cx36 is controlled by protein kinase A (PKA) activity, changing in response to alteration of cytoplasmic cAMP [38, 39, 47] (Fig. 2). In both mouse and zebrafish, the action of the dopamine D4 receptor is opposed by the action of a Gs-coupled adenosine A2a receptor [39, 47]. Secreted dopamine and extracellular adenosine levels vary in retina in opposite phase and are both regulated by circadian rhythms: dopamine is high in the daytime or subjective day, while adenosine is high in nighttime or subjective night. Li et al. have recently found that the adenosine A1 receptor is also present. The Gi-coupled A1 receptor has higher affinity for adenosine than does the A2a and is activated in the daytime by the lower extracellular adenosine level that remains. This A1 receptor activation reinforces the inhibitory action of the dopamine D4 receptor on adenylyl cyclase, strongly suppressing Cx36 phosphorylation and photoreceptor coupling in the daytime. Since all three receptors act on the same target, adenylyl cyclase, the regulation of Cx36 phosphorylation and photoreceptor coupling is a steep biphasic function that keeps coupling minimal during the daytime (Fig. 2).
Signaling pathways that control coupling in two types of retinal neuron. Coupling through Cx36 gap junctions is regulated by Cx36 phosphorylation through an order of magnitude dynamic range. Phosphorylation enhances coupling, and pathways that promote Cx36 phosphorylation are colored green in this diagram while those that reduce phosphorylation are colored red. Elements colored blue are hypothesized to play a role but have not been specifically demonstrated. a Retinal AII amacrine cell coupling is increased by Cam Kinase II phosphorylation driven by Ca2+ influx through non-synaptic NMDA-type glutamate receptors. This process depends on spillover glutamate derived from bipolar cells and is enhanced by activation of synaptic AMPA-type glutamate receptors that depolarize the cell. Reduction of Cx36 phosphorylation is driven by an independent pathway in which activation of D1 dopamine receptors increases adenylyl cyclase activity, activating protein kinase A, which in turn activates protein phosphatase 2A. Protein phosphatase 1 suppresses this pathway. Both pathways are activated by light, but with different thresholds, leading to an inverted U-shaped light adaptation curve. b Photoreceptor coupling is enhanced by Cx36 phosphorylation driven directly by protein kinase A activity under control of adenylyl cyclase (AC). AC activity is in turn controlled by an intricate set of G-protein coupled receptors regulated by circadian time and light adaptation. Darkness during the night phase increases extracellular adenosine such that activation of A2a adenosine receptors dominates signaling and activates AC. Light adaptation or subjective daytime results in reduced extracellular adenosine and increased dopamine secretion such that activation of dopamine D4 receptors dominates signaling to suppress AC activity. A1 adenosine receptors supplement this effect.
The opposing signaling pathways routed through a common effector impart a steep biphasic character to the light adaptation and circadian control of coupling in this neural network.
In retinal AII amacrine cells, plasticity of electrical coupling has been recognized for nearly 25 years. This plasticity is driven by light, with a biphasic pattern showing very low coupling in prolonged dark-adapted conditions, high coupling with low-intensity illumination, and low coupling again with bright illumination [50, 51]. The bright light-driven reduction in coupling is mediated by dopamine, with dopamine D1 receptors increasing adenylyl cyclase activity and enhancing protein kinase A activity [49, 52]. AII amacrine cells use Cx36, and the suppression of coupling by protein kinase A activity is inconsistent with the positive effect that protein kinase A activity has on photoreceptor coupling mediated by Cx36 [38, 39]. This contradiction was resolved by Kothmann et al., who demonstrated that PKA activity in turn activated protein phosphatase 2A to drive dephosphorylation of Cx36 in AII amacrine cells (Fig. 2), resulting in uncoupling.
The ascending leg of the AII amacrine cell’s biphasic light adaptation curve depends on the activity of glutamatergic On pathway bipolar cells, which are first-order excitatory interneurons postsynaptic to photoreceptors. Like other forms of activity-dependent potentiation, enhancement of AII amacrine cell coupling results from activation of NMDA receptors, Ca2+ influx, and activation of Cam Kinase II, which phosphorylates Cx36. The NMDA receptors on AII amacrine cells are non-synaptic and are closely associated with Cx36, so their activation depends on spillover glutamate. This most likely comes from rod bipolar cells, which are presynaptic to the AII amacrine cell, but may also come from cone On bipolar cells that are nearby. Because the signaling pathways in AII amacrine cells that phosphorylate and dephosphorylate Cx36 are independent (Fig. 2) and have different illumination thresholds, the light adaptation curve of the AII amacrine cell shows its characteristic biphasic pattern.
The activity-dependent potentiation of AII amacrine cell electrical synapses resembles that originally described in the mixed synapse of auditory VIIIth nerve club endings onto Mauthner cells in the goldfish [54, 55]. Plasticity in the Mauthner cell differs in that the NMDA receptors that provide the Ca2+ signal are synaptic and require high-frequency stimulation to potentiate. A similar form of plasticity dependent upon non-synaptic NMDA receptors has also been described recently in rat inferior olive neurons.
A variety of other signaling pathways have been found to modulate electrical synapses. In interneurons of the thalamic reticular nucleus (TRN), excitatory input depresses electrical synapses through activation of metabotropic glutamate receptors (mGluRs). This signaling has been explored in detail recently. Both group I and group II mGluRs modulate coupling, but with opposite effects. The dominant effect appears to be through activation of group I mGluRs, which produce long-term depression by activation of a Gs signaling pathway, stimulating adenylyl cyclase and activating PKA. However, selective activation of the group II receptor mGluR3 promotes long-term potentiation through activation of Gi/o. This shares the same pathway, routing ultimately through PKA activity. Since TRN neurons employ Cx36, through which electrical coupling is increased by phosphorylation [35, 37–39], this signaling mechanism must include a PKA-activated phosphatase to reduce Cx36 phosphorylation upon PKA activation in a manner similar to that in retinal AII amacrine cells.
Histamine H1 and H2 receptors have been found to modulate coupling among various populations of neurons in the supraoptic nucleus [60, 61]. H2 receptors signal through adenylyl cyclase, but H1 receptors instead activate NO synthase, signaling through nitric oxide, guanylyl cyclase, and protein kinase G. A potentially similar nitric oxide-driven signaling pathway also selectively regulates the heterologous electrical synapses between retinal AII amacrine cells and cone On bipolar cells. Thus it is apparent that a wide variety of signaling pathways have been employed to regulate electrical synaptic strength via connexin phosphorylation and dephosphorylation in different neurons throughout the central nervous system.
Changes in number of channels
Changes in the expression level of connexins provide a mechanism to alter coupling over time scales of hours to weeks. Such changes are most prominent in development. Electrical coupling in most areas of the vertebrate CNS tends to increase to high levels in early phases of development, and then reduce again [62–64]. One study found that activation of group II mGluRs was responsible for the developmental increase of coupling, acting through both transcriptional and post-transcriptional mechanisms.
A surprisingly similar increase in neuronal coupling is also seen following various types of injury. Ischemic injuries result in an increase in neuronal coupling and the level of Cx36 protein, without an apparent increase in transcript level [67, 68]. This has been attributed to group II mGluR activation, as was the developmental increase, with dependence on a cAMP/PKA signaling pathway. Traumatic injuries [69, 70] and seizures [71, 72] also result in increases of neuronal coupling, although these insults lead to increases in Cx36 transcript level. In these contexts, alterations in the expression levels of the connexins that form electrical synapses are important factors in long-term changes in neuronal coupling.
Electrical coupling of mature neurons is critically dependent on maintenance of a steady-state population of gap junction proteins. A recent study showed that electrical coupling in goldfish Mauthner cell mixed synapses was reduced within a few minutes if perturbed by peptides that disrupted stabilizing interactions of Cx35 with scaffolding proteins or blocked SNARE-mediated trafficking of new Cx35. Another study found circadian regulation of Cx36 transcript and protein levels in photoreceptors. These studies reveal that electrical synapses are dynamic structures whose channels are turned over actively, suggesting that regulated trafficking of connexons may contribute to the modification of gap junctional conductance.
The role of the passive properties of the postsynaptic cell
The membrane resistance of the postsynaptic cell
As previously mentioned, electrical coupling depends on both the resistance of the gap junction and the membrane resistance of the postsynaptic cell. While changes of the gap junction resistance due to modifications of the single channel conductance or the number of intercellular channels can produce significant changes in the coupling coefficient, modifications of the postsynaptic membrane can also underlie significant and highly dynamic changes in the strength of electrical coupling, representing an additional point of regulation. Because the junctional resistance (Rj) and the membrane resistance of the postsynaptic cell (R2) constitute a voltage divider (Fig. 1d), when Rj is large compared to R2 most of the input voltage drops across Rj and only a minor fraction across R2, producing a modest voltage change in the postsynaptic cell and hence a low coupling coefficient. In contrast, if R2 is large compared to Rj, a correspondingly large fraction of the input voltage (V1) appears across the membrane of the postsynaptic cell (V2); a large voltage drop across R2 corresponds to a large coupling coefficient, meaning that the cells are strongly coupled. This dependency of the coupling coefficient on the input resistance of the postsynaptic cell determines the directionality of transmission when electrical coupling occurs between cells of dissimilar input resistances: electrical transmission will be more efficient from the lower input resistance cell to the higher input resistance cell than in the opposite direction. Therefore, despite the presence of non-rectifying contacts, symmetrical communication will occur only when the connected cells present similar input resistances. Hence, the directionality of electrical transmission imposed by asymmetry of the passive properties of connected cells might be a key determinant of the flow of information within neural circuits.
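The voltage-divider relationship can be made concrete with a minimal numeric sketch. All resistance values below are hypothetical round numbers chosen for illustration, not measurements from any recorded cell pair:

```python
# Coupling coefficient of an electrical synapse modeled as a voltage divider
# between the junctional resistance (Rj) and the postsynaptic membrane
# resistance: k = V2/V1 = R_post / (Rj + R_post).

def coupling_coefficient(rj_mohm: float, r_post_mohm: float) -> float:
    """Steady-state (DC) coupling coefficient."""
    return r_post_mohm / (rj_mohm + r_post_mohm)

rj = 300.0  # junctional resistance, MOhm (hypothetical)
r1 = 50.0   # input resistance of cell 1, MOhm (hypothetical)
r2 = 200.0  # input resistance of cell 2, MOhm (hypothetical)

# Asymmetric coupling arises purely from asymmetric input resistances:
k12 = coupling_coefficient(rj, r2)  # cell 1 -> cell 2
k21 = coupling_coefficient(rj, r1)  # cell 2 -> cell 1
print(f"k12 = {k12:.2f}, k21 = {k21:.2f}")  # transmission favors low -> high Rin
```

Note that the junction itself is non-rectifying in this sketch; the directional preference comes entirely from the postsynaptic load, as the paragraph above describes.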
Modification of passive membrane properties by synaptic inputs
Interestingly, modifications of the membrane resistance (Rm) of coupled cells due to nearby chemically mediated synaptic actions can significantly modulate the strength of electrical coupling in a highly dynamic fashion [5, 75]. Because these synaptic actions usually involve changes of membrane permeability to different ion species, they are accompanied by corresponding changes in the membrane resistance of the postsynaptic cell and hence in the strength of electrical coupling. Typically, excitatory synaptic actions are mediated either by an increased membrane permeability to Na+ and K+ (decreased Rm) or by a decreased permeability to K+ (increased Rm). Usually, synaptic actions are defined by the sign of their effect on the membrane potential of the postsynaptic cell (depolarization versus hyperpolarization). What is remarkable is that although both of these synaptic actions produce depolarizing shifts of membrane voltage, they have opposite effects on the efficacy of electrical transmission: whereas synaptic actions involving an increase in Rm enhance the strength of coupling, a reduction in Rm elicits an uncoupling of electrically connected cells. A similar shunting effect by nearby GABAergic inputs has been proposed to underlie decoupling in pairs of inferior olivary neurons [77, 78]. These results indicate that the membrane resistance of the postsynaptic cell is a key element in regulating electrical coupling, being as important as the junctional resistance. This means that changes in the efficacy of electrical synapses might be accomplished through modification of either of these two resistances. Alternatively, when electrical coupling is expected to remain constant in order to assure stable network function, changes in the electrophysiological properties of coupled cells require corresponding changes of junctional resistance.
In fact, concurrent changes of the junctional and membrane resistances of coupled cells in a homeostatic fashion have been proposed to underlie the stability of electrical coupling strength between neurons of the thalamic reticular nucleus during development.
The time constant of the postsynaptic cell
The time course of membrane voltage changes is dominated by the cell’s capacitance, which results from the ability of biological membranes to separate electrical charges. While a simple ohmic resistor responds to a step current with a corresponding voltage step, cells show voltage responses that rise and decay more slowly than the current step (Fig. 1b). This property of the membrane can be modeled by a resistor connected in parallel to a capacitor. The ability of this circuit to slow down changes in voltage results from the fact that a discharged capacitance offers no opposition to current flow, so that at the beginning of the current step all current flows through the capacitance and none through the resistance. As the capacitance charges, it progressively opposes further current, and more current flows through the resistance.
This circuit comprises a simple low-pass filter for input currents, characterized by its time constant. Indeed, the resistance of the gap junction connected in series to the parallel resistance and capacitance of the postsynaptic cell behaves as a low-pass filter, so that the high-frequency components of presynaptic signals are comparatively more attenuated. That is, slow fluctuations of membrane voltage pass more effectively between cells than do fast signals [7, 81]. This is a characteristic property of electrical transmission and underlies the fact that coupling potentials present a slower time course in comparison to the presynaptic signals that generated them (Fig. 1c). As a result of this property, a delay of postsynaptic responses is introduced with respect to the presynaptic signals. This property of low-pass filters, known as phase lag, represents the synaptic delay of electrical synapses. Although current begins to flow across the junction without delay, time is required for charging the postsynaptic capacitance to a significant level to generate a detectable voltage change above the noise level.
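The frequency dependence described above follows directly from replacing the postsynaptic membrane resistance in the voltage divider with the full RC impedance. A minimal sketch, with purely illustrative parameter values (not fits to any recorded neuron):

```python
# Frequency-dependent coupling coefficient when the postsynaptic membrane
# is an RC circuit: Z2(f) = R2 / (1 + j*2*pi*f*R2*C2), and
# k(f) = |Z2 / (Rj + Z2)|, which falls off with frequency (low-pass).
import math

def coupling_vs_frequency(f_hz, rj, r2, c2):
    z2 = r2 / complex(1.0, 2.0 * math.pi * f_hz * r2 * c2)
    return abs(z2 / (rj + z2))

rj = 300e6    # junctional resistance, ohms (hypothetical)
r2 = 200e6    # postsynaptic membrane resistance, ohms (hypothetical)
c2 = 100e-12  # postsynaptic capacitance, farads (tau = R2*C2 = 20 ms)

for f in (1.0, 10.0, 100.0):
    print(f"{f:6.1f} Hz -> k = {coupling_vs_frequency(f, rj, r2, c2):.3f}")
```

At DC the expression reduces to the familiar divider R2/(Rj + R2); as frequency rises, the capacitance shunts the postsynaptic membrane and the coupling coefficient falls, which is why spikelets are slower and smaller than the presynaptic spikes that produce them.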
Early descriptions of electrical synapses in invertebrates already proposed that these contacts present low-pass filtering characteristics [4, 82, 83]. More recently, filtering characteristics of electrical transmission between mammalian central neurons have been demonstrated by using dual whole-cell patch recordings and injecting sinusoidal currents of different frequencies (Fig. 3b). Under these experimental conditions, coupling coefficients and phase lag were determined as a function of sinusoidal frequency. This experimental approach in different cell types, such as GABAergic interneurons of the neocortex [84–86], neurons of the thalamic reticular nucleus, Golgi cells of the cerebellum, and retinal AII amacrine cells, among others, confirmed that electrical transmission presents low-pass filter characteristics, allowing the passage of low-frequency signals but strongly attenuating and delaying signals of higher frequency.
Frequency selectivity of electrical transmission. a Equivalent circuit of a pair of coupled cells including the passive elements (resistance and capacitance, black) and active voltage-dependent conductances (INap and IK) represented as a variable resistor in series with an EMF. b Top, a sinusoidal current waveform of increasing frequency (ZAP protocol) is injected into cell 1 (I Cell 1) in order to test the frequency-dependent properties of electrical transmission between coupled cells. Middle, superimposed membrane voltage responses of the presynaptic cell (Vm Cell 1) and of the postsynaptic cell (Vm Cell 2) for a pair of coupled cells that include only passive elements (RC circuit, black elements in the circuit in a). Both responses are characteristic of a low-pass filter, where the amplitude of the membrane response decreases monotonically as sinusoidal frequency increases. Bottom, by contrast, when cells present passive and active voltage-dependent currents (IK and INap), membrane responses show a degree of frequency selectivity, where signals close to the characteristic frequency are of larger amplitude than signals whose frequency lies far from this value. c Schematic plot of the frequency transfer characteristics of electrical transmission, calculated as the ratio of the FFT of the postsynaptic membrane response over the FFT of the presynaptic membrane response depicted in b, for a pair of passive cells (gray trace) and for a pair of cells that also present resonant and amplifying currents (IK and INap, respectively). Whereas the transfer function when cells present only passive elements shows the typical profile of a low-pass filter (gray trace), the presence of voltage-dependent currents means that transmission of signals near the characteristic frequency (vertical dashed line) is less attenuated, producing a maximum in the function (red trace). Traces are representative drawings.
This property of electrical synapses means that slow potential changes (typically subthreshold) are preferentially transmitted over action potentials, endowing electrical synapses with the ability to transmit different information than the spikes transmitted via chemical synapses. For instance, in cell types where action potentials are followed by a large and prolonged after-hyperpolarization (AHP), due to the delayed activation of a voltage- and/or Ca2+-dependent K+ current, coupling potentials tend to be predominantly hyperpolarizing events. This phenomenon results from the low-pass filter properties of electrical transmission. Because the high-frequency components of the fast presynaptic action potential are more attenuated than the slow AHP, the coupling potential results in a net hyperpolarizing signal, inhibiting neural activity rather than promoting activation of the postsynaptic neuron [85, 87, 89]. In the cerebellar cortex, this effect has been implicated in the desynchronization of the population of Golgi cells due to sparse depolarizing synaptic inputs.
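The net-hyperpolarizing spikelet can be reproduced in a toy simulation: a fast spike followed by a slow AHP is passed through the single-pole low-pass filter formed by the junctional resistance and the postsynaptic RC membrane. The waveform shape and all parameters below are illustrative assumptions, not recordings:

```python
# A fast presynaptic spike (+80 mV, 0.5 ms) followed by a slow AHP
# (-10 mV, 50 ms) is filtered by the postsynaptic membrane. With the
# junctional resistance Rj in series, cell 2 relaxes toward k*V1 with
# time constant tau = C2*(R2*Rj)/(R2+Rj) and DC gain k = R2/(Rj+R2).
import numpy as np

R2, C2, Rj = 100e6, 100e-12, 300e6
k = R2 / (Rj + R2)                   # DC coupling coefficient
tau = C2 * (R2 * Rj) / (R2 + Rj)     # effective postsynaptic time constant

dt = 1e-5
t = np.arange(0.0, 0.1, dt)
v1 = np.zeros_like(t)
v1[(t >= 0.005) & (t < 0.0055)] = 80e-3    # 0.5 ms spike, +80 mV
v1[(t >= 0.0055) & (t < 0.0555)] = -10e-3  # 50 ms AHP, -10 mV

v2 = np.zeros_like(t)
for i in range(len(t) - 1):          # forward Euler, first-order low-pass
    v2[i + 1] = v2[i] + dt * (k * v1[i] - v2[i]) / tau

print(f"peak depolarization {1e3 * v2.max():.2f} mV, "
      f"peak hyperpolarization {1e3 * v2.min():.2f} mV")
```

Because the 0.5 ms spike is far briefer than the filter's time constant while the 50 ms AHP is not, the hyperpolarizing phase of the coupling potential ends up larger than the depolarizing one, matching the net-inhibitory spikelets described above.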
The role of the active membrane properties of the postsynaptic cell
Electrophysiological properties of neurons
In addition to the passive membrane properties (those that are linear with respect to the membrane voltage), excitable cells like neurons present active membrane properties, which are highly non-linear mechanisms due to complex time- and voltage-dependent processes. The most remarkable outcome of the active membrane properties is action potential generation, underlain by the classical Na+ and K+ conductances described by Hodgkin and Huxley in the squid axon. Beyond these spike-generating mechanisms, which allow neurons to communicate over long distances in a non-decremental fashion, excitable cells usually present a large variety of subthreshold active properties. These active mechanisms, along with the passive properties, establish the way neurons integrate spatially and temporally distributed synaptic inputs, and how these inputs are translated or encoded into a time series of action potentials. The active membrane properties of neurons depend on the kind, density and distribution of voltage-operated ion channels in the surface membrane of the different cellular compartments. Central neurons present a rich repertoire of voltage-operated membrane ion channels that endow them with powerful encoding capabilities, represented by the ability to transform their inputs into complex firing patterns. Indeed, neurons express tens of different voltage-operated membrane conductances, distinguished by their ion selectivity, voltage range of activation, kinetics, presence of inactivation, and modulation by intracellular second messengers, giving rise to a wide variety of electrophysiological phenotypes [92–95].
Voltage dependency of coupling potential
Despite the limited voltage gating of connexin intercellular channels imposed by their slow kinetics, electrical coupling between neurons can present marked voltage dependency. This phenomenon does not represent a voltage-dependent property of the gap junctions, however, but instead is supported by the active properties of the non-junctional membrane of the postsynaptic cell. For instance, in fish, a pair of gigantic command neurons, the Mauthner cells, which are responsible for the initiation of escape responses, are contacted by a special class of auditory afferents through mixed electrical and chemical synaptic contacts. These electrical contacts not only allow the forward transmission of signals (from afferents to the Mauthner cell), but also support retrograde transmission by allowing the spread of dendritic postsynaptic depolarizations to the presynaptic afferents. Moreover, retrograde coupling potentials in the afferents present a marked voltage dependency. Depolarization of the membrane potential of these afferents evokes a dramatic increase in coupling potential amplitude, eventually enough to activate them, thus supporting a mechanism of lateral excitation whereby the sound-evoked activation of some afferents can recruit more afferents to reinforce the synaptic action on Mauthner cells [97, 98]. This amplifying mechanism is blocked by extracellular application of tetrodotoxin (TTX) or intracellular injection of QX-314, strongly suggesting the involvement of a Na+ current. Additionally, its subthreshold voltage range of activation, among other properties, indicates that the persistent sodium current (INap) of these afferents is the underlying mechanism of this amplification.
The INap is a non-inactivating fraction of the Na+ current, which activates at subthreshold membrane voltages and is particularly well suited to perform such amplification because of its rapid kinetics and subthreshold voltage range of activation. In the mammalian brain, similar amplifying mechanisms of coupling potentials involving Na+ currents have been described in the mesencephalic trigeminal (MesV) nucleus of the rat. This cell population is coupled mostly in pairs, and activation of one neuron of an electrically coupled pair produces a spikelet in the postsynaptic cell (Fig. 1c). This coupling potential critically depends on the membrane potential, being enhanced by depolarization of the postsynaptic cell and eventually triggering an action potential in this cell. The spikelet exhibits a positive correlation with the membrane potential of the postsynaptic cell, and because of its voltage range of activation and sensitivity to sodium channel blockers, it represents the activation of a persistent sodium current. A similar amplifying mechanism has been proposed in the cerebellar cortex [87, 100] and the thalamic reticular nucleus. Thus, the INap endows electrical coupling with voltage-dependent amplification, suggesting a relevant contribution of active membrane conductances in regulating the efficacy of electrical transmission between neurons. Moreover, as such amplification of electrotonic potentials might be enough to recruit the postsynaptic cell, it tends to synchronize the activity of networks of neurons, emphasizing the role of active conductances in the dynamics of networks of electrically coupled neurons.
Frequency selective transmission
Most typically, electrical transmission between neurons possesses low-pass filter properties imposed by the RC circuit of the postsynaptic cell. In contrast, electrical coupling between MesV neurons shows band-pass filter properties, whereby signals with frequencies in the range of 50 to 80 Hz are preferentially transmitted, even better than DC signals (Fig. 3). Accordingly, transmission of spikes through these contacts is significantly more efficient than in electrical contacts between FS or LTS interneurons of the neocortex, whose frequency transfer resembles a low-pass filter. This suggests that electrical transmission between MesV neurons is well suited for the transmission of action potentials, which most probably constitute the main signal source for coupling, and promotes the synchronous activation of pairs of MesV neurons.
This frequency selectivity, or band-pass characteristic, results from the resonant properties of MesV neurons. Resonance is a property that enables neurons to discriminate among their inputs on the basis of frequency content, so that synaptic inputs with frequency content close to the resonant frequency produce the largest responses. Resonance arises from the interplay of two mechanisms with specific frequency-domain properties: the passive and the active membrane properties. As previously discussed, the passive properties, due to the capacitance in parallel with the conductance of the membrane, act as a low-pass filter (whose cutoff frequency is set by the membrane time constant), attenuating responses to inputs with high frequency content. On the other hand, certain voltage-dependent conductances that actively oppose changes in membrane voltage, like K+ currents, can confer high-pass filter properties (whose cutoff frequency is set by their activation time constant), thus attenuating responses to inputs with low frequency content. While these two mechanisms with opposite filter properties are present in almost every neuronal type, since low-pass filtering due to the RC circuit is a basic property of biological membranes and K+ currents are ubiquitous conductances, not every neuron expresses resonance. In fact, to produce resonance the K+ current must activate slowly compared to the membrane time constant. Thus, the combination of these two mechanisms with appropriate cutoff frequencies creates a band-pass, or resonant, filter capable of rejecting inputs whose frequencies lie outside this band.
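This interplay can be sketched with a standard linearization: near rest, a slowly activating K+ current behaves like an inductive branch (a resistance in series with an inductance) in parallel with the passive RC membrane, so the membrane impedance peaks at an intermediate frequency instead of at DC. All component values below are hypothetical and chosen only to make the peak visible:

```python
# Linearized resonant membrane: passive RC in parallel with an "inductive"
# branch (r_l in series with L) standing in for a slow K+ current.
# |Z(f)| of this circuit is band-pass rather than low-pass.
import math

def impedance(f_hz, r, c, r_l, l):
    w = 2.0 * math.pi * f_hz
    y = 1.0 / r + complex(0.0, w * c) + 1.0 / complex(r_l, w * l)
    return abs(1.0 / y)

r = 100e6     # membrane resistance, ohms (hypothetical)
c = 100e-12   # membrane capacitance, farads
r_l = 10e6    # resistance of the slow K+ branch, ohms
l = 5e4       # equivalent inductance, henries (sets the slow K+ kinetics)

freqs = [1, 5, 10, 20, 50, 70, 100, 200, 500]
zs = [impedance(f, r, c, r_l, l) for f in freqs]
f_peak = freqs[zs.index(max(zs))]
print(f"impedance peaks near {f_peak} Hz")  # band-pass, not low-pass
```

At low frequency the inductive branch shunts the membrane (attenuating slow inputs); at high frequency the capacitance does (attenuating fast inputs); only near the resonant frequency do both effects relax, yielding the impedance maximum.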
Although the combination of these two mechanisms sets the frequency of resonance, its expression typically depends on the activation of amplifying currents. Such currents are essentially the inverse of resonant currents; that is, they amplify voltage changes and activate quickly relative to the membrane time constant. The persistent Na+ current is an example of such an amplifying current, whose interaction with resonant currents enhances resonance. This frequency preference endows neurons with the ability to generate spontaneous membrane voltage oscillations and repetitive discharges, or to respond best to inputs within a narrow frequency window. In the context of electrical synaptic transmission, resonance determines that signals with frequency content near the resonant frequency will be more readily transmitted than other signals, even better than DC signals, promoting the transmission of signals of biological relevance (Fig. 3).
MesV neurons are endowed with a rich repertoire of voltage-gated membrane conductances, like the A-type K+ current (IA) and the INap, supporting resonance, which results in the generation of subthreshold membrane voltage oscillations and repetitive discharges in the range of 50 to 100 Hz [103, 104]. Consistently, electrical transmission between MesV neurons exhibits band-pass filter characteristics instead of the classical low-pass filter properties. In fact, assessment of the filter properties by injecting frequency-modulated sine wave currents (ZAP protocols, Fig. 3b) and calculating the ratio of the Fast Fourier Transform (FFT) of the postsynaptic voltage changes over the FFT of the presynaptic voltage changes showed a peak in the range of 50–100 Hz (Fig. 3c). Thus, the frequency transfer function of electrical transmission between MesV neurons presents a maximum at frequencies near 80 Hz, indicating that transmission of electrical signals between MesV neurons exhibits some degree of frequency preference and therefore does not behave as a simple low-pass filter. Consistent with the critical role of active membrane properties in determining frequency-selective transmission at these electrical contacts, the addition of TTX (0.5 μM) to the extracellular solution results in a reduction of the amplitude of the transfer function, particularly for values around 50–80 Hz, indicating the participation of Na+ conductances. The subsequent addition of 4-AP (1 mM), a blocker of the A-type current among other K+ conductances, further modifies the transfer characteristics, which now resemble those of a simple low-pass filter (Fig. 3c). These voltage-dependent conductances improve transmission not only in terms of the amplitude of postsynaptic signals, but also by reducing the phase lag between presynaptic and postsynaptic responses.
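The ZAP analysis itself can be mimicked in a toy model: inject a chirp current into one cell of a purely passive coupled pair, record both membrane voltages, and take the ratio of their FFTs. The result is the low-pass baseline against which the MesV band-pass behavior stands out. All parameters are hypothetical round values, not fits to recorded neurons:

```python
# ZAP protocol on a passive two-cell model: chirp current into cell 1,
# transfer function computed as |FFT(V2)| / |FFT(V1)|.
import numpy as np

R1 = R2 = 100e6   # membrane resistances, ohms (hypothetical)
C1 = C2 = 100e-12 # membrane capacitances, farads (tau = 10 ms)
Rj = 300e6        # junctional resistance, ohms

dt, T = 1e-5, 2.0
t = np.arange(0.0, T, dt)
f0, f1 = 0.5, 100.0                            # chirp sweeps 0.5 -> 100 Hz
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T))
i_inj = 50e-12 * np.sin(phase)                 # 50 pA ZAP into cell 1

v1 = np.zeros_like(t)
v2 = np.zeros_like(t)
for k in range(len(t) - 1):                    # forward Euler integration
    ij = (v1[k] - v2[k]) / Rj                  # junctional current
    v1[k + 1] = v1[k] + dt * (i_inj[k] - v1[k] / R1 - ij) / C1
    v2[k + 1] = v2[k] + dt * (ij - v2[k] / R2) / C2

freqs = np.fft.rfftfreq(len(t), dt)
transfer = np.abs(np.fft.rfft(v2)) / np.abs(np.fft.rfft(v1))
lo = transfer[(freqs > 1) & (freqs < 3)].mean()    # transfer near 2 Hz
hi = transfer[(freqs > 60) & (freqs < 90)].mean()  # transfer near 75 Hz
print(f"transfer at ~2 Hz: {lo:.3f}, at ~75 Hz: {hi:.3f}")  # low-pass
```

Adding voltage-dependent currents (an amplifying INap and a resonant IK) to the postsynaptic compartment would be the next step to reproduce the 50–100 Hz peak, but the passive case above already illustrates the FFT-ratio method used in the experiments.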
Hence, while amplification increases the efficacy of synaptic transmission, the mitigation of the phase lag over the same frequency range improves its accuracy, promoting the synchronous activation of pairs of coupled MesV neurons.
Therefore, the active membrane properties of neurons might play a critical role in electrical synaptic transmission by providing an extremely sensitive mechanism of voltage-dependent amplification of electrical coupling potentials and endowing this modality of interneuronal communication with frequency selectivity. Moreover, modulation of voltage-dependent conductances of the non-junctional membrane by the action of neurotransmitters represents a potential source of modulation of the efficacy of electrical transmission.
Human diseases and disorders have ultimate, evolutionary causes as well as proximate, mechanistic ones. Evolutionary causes of disease include evolutionary legacies (which can constrain adaptation and potentiate maladaptation), tradeoffs (which can prevent optimization of structure or function), mismatches to modernity (which can cause maladaptation via divergence between phenotypes and environments), genetic conflicts (which can cause or generate new scope for maladaptation) and benefits to reproduction that are coupled with health-related costs to the self [1]. Integrated analyses of the proximate and ultimate causes of disease can deepen insights into both disease etiology and evolutionary processes. Such studies are especially important when diseases are human-specific or human-elaborated, and derive, in part, from effects of recent selection along the lineage leading to modern humans.
In this article, we synthesize ultimate with proximate causes in the study of endometriosis. We first describe the symptoms and diagnostic features of endometriosis, and review previous hypotheses on its physiological determinants. Second, we analyze the bases for endometriosis by applying Niko Tinbergen's [2] four questions for analyzing phenotypes: (1) phylogeny and evolutionary history, (2) development, (3) mechanism and (4) adaptive significance. Addressing these four questions allows for an integrative, interdisciplinary analysis of the causes and correlates of endometriosis and the traits that characterize it, in the context of understanding why and how this disorder persists in humans given its high heritability. In doing so, we also describe, evaluate and provide tests of a recently developed hypothesis for a primary cause of endometriosis: that its development is driven by relatively low levels of testosterone in prenatal development [3].
Regulation in the Apical Meristem
Diversification of cells in the apical meristem is a complex process controlled by a number of genes. In effect, these genes determine the shape and structure of a plant. As the apical meristem grows, it branches off smaller meristem regions, which will develop into branches of the stems and roots. The timing and number of these events are controlled by a series of genes within plants. The various expressions of these genes lead to different forms, some of which are more successful than others. The interaction between these genes and the growth of the apical meristem has given rise to the hundreds of thousands of different plant species that exist today.
The variety of forms in plants is attributable almost solely to differences in how their apical meristems function. Some plants, like bushes, branch continuously and evenly, while plants like pine trees have a single dominant main stem. The root apical meristem is likewise responsible for root development. Roots can be deep and focused on a single main axis, such as the taproot common to many weeds. Corn and bamboo, on the other hand, have much more dispersed, fibrous root systems, which depend on extensive branching and lateral roots.
1. What is the difference between an apical meristem and an intercalary meristem?
A. No difference
B. The apical meristem is at the tip
C. Intercalary meristems can be apical
2. How can the apical meristem be manipulated to increase the harvest of a crop?
A. They can be cut to create a bushy plant
B. More meristems means more fruit
C. They can’t be manipulated
3. How is the apical meristem similar to stem cells in a human fetus?
A. Both have the ability to differentiate
B. They are completely different
C. They divide in the same way