Earth’s 24-hour rotation about its axis is mirrored in the circadian clock that resides within each of our cells and controls the expression of ~10% of all genes. The circadian clock is built as a negative feedback loop in which clock proteins inhibit their own synthesis. Over the last decade, a picture has emerged in which each cell is a self-sustained circadian oscillator that runs even without synchronizing cues. Here, we investigated state-of-the-art single-cell bioluminescence recordings of clock gene expression. These time series turn out to be very well described by low-dimensional models, enabling us to extract descriptive parameters that characterize each cell. We find that different cell types differ little in their dynamics, whereas different mutations in core clock genes yield distinct dynamic characteristics. Furthermore, we could not statistically reject the alternative hypothesis that the cells are in fact damped oscillators driven by noise. We therefore regard the question of whether the circadian clock is a damped or a self-sustained oscillator as still unresolved. Further, we propose a way to settle this question by examining the frequency-dependent response of single cells to periodic stimuli. This would put us in a better position to understand how cells coordinate and synchronize their circadian rhythms.
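The alternative hypothesis above, a damped oscillator kept rhythmic by noise, can be sketched with a simple Euler–Maruyama integration of a noise-driven damped harmonic oscillator. All parameter values here are illustrative assumptions for the sketch, not quantities fitted to the recordings:

```python
import math
import random

def simulate(lam=0.1, omega=2 * math.pi / 24.0, sigma=0.5,
             dt=0.1, hours=240, seed=1):
    """Integrate x'' + 2*lam*x' + omega^2*x = sigma*xi(t),
    a damped harmonic oscillator driven by white noise, with a
    semi-implicit Euler-Maruyama scheme (time in hours;
    omega corresponds to a ~24 h period)."""
    rng = random.Random(seed)
    x, v = 1.0, 0.0          # initial displacement and velocity
    xs = []
    for _ in range(int(hours / dt)):
        xs.append(x)
        # deterministic damped-oscillator drift plus stochastic kick
        v += (-2 * lam * v - omega**2 * x) * dt \
             + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        x += v * dt
    return xs

trace = simulate()  # ten simulated days of noisy, damped oscillation
```

Without the noise term (sigma=0), the trajectory decays to zero; with it, the oscillation persists indefinitely with fluctuating amplitude and phase, which is precisely why such a model can be hard to distinguish from a self-sustained oscillator on the basis of single-cell time series alone.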
Light is an abundant signal that many organisms use to assess the status of their environment. Species from all kingdoms have evolved the capacity to sense and respond to wavelengths across the visible spectrum. Light has long been linked to disease (Figure 1); however, the mechanisms behind many of these observations are not well understood. Recently, a direct link has been established between specific protein photosensors and the ability to cause disease in both pathogenic bacteria and fungi; thus, certain pathogens require these photosensors for full virulence. A role for photoperception is likely to emerge as a common theme in microbial pathogenesis.
One of the primary obstacles to understanding the computational underpinnings of biological vision is its sheer scale: the visual system is a massively parallel computer composed of billions of elements. While this scale has historically been beyond the reach of even the fastest supercomputing systems, recent advances in commodity graphics processors (such as those found in the PlayStation 3 and high-end NVIDIA graphics cards) have made unprecedented computational resources broadly available. Here, we describe a high-throughput approach that harnesses the power of modern graphics hardware to search a vast space of large-scale, biologically inspired candidate models of the visual system. The best of these models, drawn from thousands of candidates, outperformed a variety of state-of-the-art vision systems across a range of object and face recognition tasks. We argue that these experiments point the way forward, both for building machine vision systems and for gaining insight into the computational underpinnings of biological vision.
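The screening strategy described (sample many candidate models, evaluate each on a task, keep the best) can be sketched as a simple random search. The candidate family and scoring below are toy stand-ins, a random linear classifier scored by accuracy, not the biologically inspired architectures of the study, and in practice each evaluation would run in parallel on graphics hardware rather than serially:

```python
import random

def sample_candidate(rng):
    """Draw one candidate model: here a toy random weight vector,
    standing in for sampling architectural parameters such as
    filter sizes or layer counts."""
    return [rng.uniform(-1, 1) for _ in range(4)]

def score(weights, data):
    """Fraction of (features, label) examples the candidate
    classifies correctly with a threshold on the weighted sum."""
    correct = 0
    for features, label in data:
        pred = 1 if sum(w * f for w, f in zip(weights, features)) > 0 else 0
        correct += (pred == label)
    return correct / len(data)

def screen(data, n_candidates=200, seed=0):
    """High-throughput screening: evaluate many random candidates
    and return the best one with its score."""
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(n_candidates):
        cand = sample_candidate(rng)
        s = score(cand, data)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score
```

The design point this illustrates is that when single-candidate evaluation is cheap and candidates are independent, the search is embarrassingly parallel, which is exactly what makes commodity graphics hardware well suited to it.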