Tuesday, 16 December 2014

Slew Rate

Slew Rate — the op amp speed limit

Slewing behavior of op amps is often misunderstood. It’s a meaty topic so let’s sort it out.
The input circuitry of an op amp circuit generally has a very small voltage between the inputs (ideally zero, right?). But a sudden change in the input signal temporarily drives the feedback loop out of balance, creating a differential error voltage between the op amp inputs. This causes the output to race off to correct the error. The larger the error, the faster it goes… that is, until the differential input voltage is large enough to drive the op amp into slewing.

If the input step is large enough, the accelerator is jammed to the floor. More input will not make the output move faster. Figure 1 shows why in a simple op amp circuit. With a constant input voltage to the closed-loop circuit there is zero voltage between the op amp inputs. The input stage is balanced and the current IS1 splits equally between the two input transistors. With a step function change in Vin, greater than 350mV for this circuit, all the IS1 current is steered to one side of the input transistor pair and that current charges (or discharges) the Miller compensation capacitor, C1. The output slew rate (SR) is the rate at which IS1 charges C1, equal to IS1/C1.
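The SR = IS1/C1 relationship is easy to check numerically. A minimal sketch, with illustrative tail-current and compensation-capacitor values that are not taken from any specific device:

```python
# Output slew rate of a simple Miller-compensated op amp: SR = IS1/C1.
# The tail current and capacitor values are illustrative only.
IS1 = 20e-6   # input-stage tail current, A
C1 = 10e-12   # Miller compensation capacitor, F

SR = IS1 / C1                # V/s
print(f"{SR/1e6:.1f} V/us")  # 2.0 V/us
```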

There are variations, of course. Op amps with slew-enhancement add circuitry to detect this overdriven condition and enlist additional current sources to charge C1 faster but they still have a limited slew rate. The positive and negative slew rates may not be perfectly matched. They are close to equal in this simple circuit but this can vary with different op amps. The voltage to slew an input stage (350mV for this design) varies from approximately 100mV to 1V or more, depending on the op amp.
While the output is slewing it can’t respond to incremental changes in the input. The input stage is overdriven and the output rate-of-change is maxed out. But once the output voltage nears its final value the error voltage across the op amp inputs reenters the linear range. Then the rate of change gradually reduces to make a smooth landing at the final value.

There's nothing inherently wrong with slewing an op amp—no damage or fines for speeding. But to avoid gross distortion of sine waves, the signal frequency and/or output amplitude must be limited so that the maximum slope does not exceed the amplifier's slew rate. Figure 2 shows that the maximum slope of a sine wave is proportional to VP and frequency. With 20% less than the required slew rate, the output is distorted into a nearly triangular shape.
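The maximum slope of a sine wave Vp·sin(2πft) is 2πf·Vp, so a given slew rate sets a highest frequency (the full-power bandwidth) at which a full-amplitude sine comes through undistorted. A quick sketch with illustrative numbers:

```python
import math

# v(t) = Vp*sin(2*pi*f*t) has maximum slope 2*pi*f*Vp; the amplifier's
# slew rate must exceed it. Values are illustrative.
SR = 1e6   # slew rate, V/s (1 V/us)
Vp = 5.0   # peak output amplitude, V

f_max = SR / (2 * math.pi * Vp)  # full-power bandwidth
print(f"{f_max/1e3:.1f} kHz")    # 31.8 kHz
```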


Large-signal square waves with very fast edges tilt on the rising and falling edges according to the slew rate of the amplifier. The final portion of a rising or falling edge will have rounding as the amplifier reaches its small-signal range as shown in Figure 1.

In a non-inverting circuit, a minimum 350mV step is required to make this op amp slew, regardless of gain. Figure 3 shows the slewing behavior for a 1V input step with gains of 1, 2 and 4. The slew rate is the same for each gain. In G=1, the output waveform transitions to small-signal behavior in the final 350mV. In G=2 and G=4 the small-signal portion is proportionally larger because the error signal fed back to the inverting input is attenuated by the feedback network. If connected in a gain greater than 50, this amplifier would be unlikely to slew because a 350mV step would overdrive the output.
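This two-regime behavior (constant-slope slewing followed by linear settling) can be mimicked with a toy model. The slew rate, time constant and 350mV linear range below are illustrative, not taken from a specific op amp:

```python
# Toy model of a slew-limited step response: the output ramps at the
# slew rate until the remaining error enters the linear range, then
# settles exponentially. All values are illustrative.
SR = 1e6        # slew rate, V/s (1 V/us)
tau = 1e-6      # small-signal settling time constant, s
v_lin = 0.35    # error below which behavior is small-signal (350 mV)

def step_response(v_final, t, dt=1e-9):
    """Output voltage at time t after an input step to v_final (from 0 V)."""
    v, elapsed = 0.0, 0.0
    while elapsed < t:
        err = v_final - v
        if abs(err) > v_lin:
            v += SR * dt * (1 if err > 0 else -1)  # slewing: constant slope
        else:
            v += (err / tau) * dt                  # linear exponential settle
        elapsed += dt
    return v

print(f"{step_response(1.0, 2e-6):.2f} V")   # most of the way to 1 V
print(f"{step_response(1.0, 10e-6):.2f} V")  # fully settled: 1.00 V
```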




Slew rate is usually specified in V/μs, perhaps because early general-purpose op amps had slew rates in the range of 1V/μs. Very high speed amplifiers are in the 1000V/μs range, but you would rarely see that written as 1kV/μs or 1V/ns. Likewise, a nanopower op amp might be specified as 0.02V/μs but seldom as 20V/ms or 20mV/μs. There's no good reason for some of these conventions; it's just the way we do it. :-)

Let's wind up with these videos:






Courtesy: Bruce, electronics-lab.com
Thanks: WGK

Opamp Basics

Basics of Opamp circuits – a tutorial on how to understand most opamp circuits

This tutorial discusses some general rules of thumb that make it easy to understand and analyze the operation of most opamp circuits. It presents some ideal properties of opamps and discusses how negative feedback drives the difference between the input voltages to zero. In other words, the output will do whatever it can to make the input voltages equal. Applying this simple fact makes it easy to analyze most opamp circuits.
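As a worked example of this rule, consider the classic non-inverting amplifier: the feedback divider forces the inverting input to equal Vin, which immediately gives the familiar gain formula. The resistor values below are arbitrary examples:

```python
# The "golden rules" applied to the non-inverting amplifier: no input
# current flows, so Rf and Rg form a divider from Vout to V-, and
# feedback forces V- = V+ = Vin. Hence Vout/Vin = 1 + Rf/Rg.
# Resistor values are arbitrary examples.
def noninverting_gain(Rf, Rg):
    return 1 + Rf / Rg

print(noninverting_gain(Rf=10e3, Rg=10e3))  # 2.0
print(noninverting_gain(Rf=30e3, Rg=10e3))  # 4.0
```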










Tactile Holograms

Tactile Holograms






When MC Hammer rapped ‘You can’t touch this’, little did he know of the work being carried out by a group of scientists at Bristol University. The team, led by Dr Ben Long with colleagues Professor Sriram Subramanian, Sue Ann Seah and Tom Carter, has produced an ultrasonic sound system able to generate 3D shapes in mid-air that can be felt.
The apparatus consists of a 16 x 20 array of ultrasonic transducers. The phase of the signals driving the individual transducers is altered to produce a shaped pressure wavefront. This is similar to the technique used in phased-array radar systems which steer a radar beam by altering the drive signal phase to an array of transmitting antennae. The team from Bristol have demonstrated how the system can produce a ‘pressure profile’ of a 3D shape in space that is tactile. By aiming the array of ultrasonic transducers at the surface of an oil bath the team show how the pressure waves making up basic three dimensional shapes such as pyramids and spheres can be made visible on the oil surface.
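The phase steering can be sketched in a few lines. Assuming a 40 kHz drive frequency and a 1 cm transducer pitch (both illustrative; the article does not give the actual parameters), each transducer is driven with a phase that cancels its path-length difference to the focal point:

```python
import math

# Sketch of phased-array focusing: drive each transducer with a phase
# that compensates its path length to the focal point so all wavefronts
# arrive there in phase. The 40 kHz frequency, 1 cm pitch and focal
# point are assumed for illustration.
c = 343.0           # speed of sound in air, m/s
f = 40e3            # drive frequency, Hz
wavelength = c / f  # ~8.6 mm

def focus_phases(positions, focus):
    """Per-transducer drive phases (radians) for a focus at (x, y, z)."""
    phases = []
    for (x, y) in positions:
        d = math.sqrt((x - focus[0])**2 + (y - focus[1])**2 + focus[2]**2)
        phases.append((2 * math.pi * d / wavelength) % (2 * math.pi))
    return phases

# a 16 x 20 grid of transducers, focusing 20 cm above the array center
pitch = 0.01
grid = [((i - 7.5) * pitch, (j - 9.5) * pitch)
        for i in range(16) for j in range(20)]
phases = focus_phases(grid, (0.0, 0.0, 0.2))
print(len(phases))  # 320 transducers
```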
Given the ultrasonic wavelengths involved, it is unlikely that much fine detail can be represented, but according to Dr Long: “Touchable holograms, immersive virtual reality that you can feel and complex touchable controls in free space, are all possible ways of using this system. In the future, people could feel holograms of objects that would not otherwise be touchable, such as feeling the differences between materials in a CT scan or understanding the shapes of artefacts in a museum”.

EA DOG displays

Industrial applications rely on the EA DOG displays



Minimal power consumption, a slim design and a wide range of available versions with multi-color backlight – these are some benefits of the EA DOG series displays.
The EA DOG series is familiar to many of you, and it's probably a favorite for these main reasons:
  • the displays are unusually flat (thin)
  • they have a very low power consumption, in the hundreds of microamps (without backlight)
  • there is a wide choice of backlights, monochrome and also RGB
  • some types are well legible even without a backlight
  • simple communication through a 4/8-bit or SPI interface, and now also I2C
So far, types with up to 128x64 px or 3×16 characters were available. The most recent additions to the EA DOG family are bigger types with resolutions of 160x104 px (EADOGXL160), 240x64 px (EADOGM240) and 240x128 px (EADOGXL240), plus a 4×20-character type (EADOGM204), together with the matching backlight modules EALED66x40, EALED94x40 and EALED94x67. These new types also maintain a low profile: only 5.8 or 6.5 mm with backlighting. A plus is that even these new types are based on standard LCD controllers.

Thursday, 2 October 2014

World’s Most Precise Clock

Introducing the World’s Most Precise Clock


An optical-lattice clock could lose just a second every 13.8 billion years





Photos: Top: National Physical Laboratory; Bottom: Jérôme Lodewyck
Time Transformed: In the photo at top, John V.L. Parry [left] and Louis Essen stand with the first cesium atomic clock, in 1956, at the United Kingdom’s National Physical Laboratory. The instrument paved the way for a redefinition of the second in 1967. At bottom is one of two modern optical-lattice clocks that have been built at the Paris Observatory.

In 1967, time underwent a dramatic shift. That was the year the key increment of time—the second—went from being defined as a tiny fraction of a year to something much more stable and fundamental: the time it takes for radiation absorbed and emitted by a cesium atom to undergo a certain number of cycles.
This change, which was officially adopted in the International System of Units, was driven by a technological leap. From the 1910s until the mid-1950s, the most precise way of keeping time was to synchronize the best quartz clocks to Earth’s motion around the sun. This was done by using telescopes and other instruments to periodically measure the movement of stars across the sky. But in 1955, the accuracy of this method was easily bested by the first cesium atomic clock, which made its debut at the United Kingdom’s National Physical Laboratory, on the outskirts of London.
Cesium clocks, which are essentially very precise oscillators, use microwave radiation to excite electrons and get a fix on a frequency that’s intrinsic to the cesium atom. When the technology first emerged, researchers could finally resolve a known imperfection in their previous time standard: the slight, irregular speedups and slowdowns in Earth’s rotation. Now, cesium clocks are so ubiquitous that we tend to forget how integral they are to modern life: We wouldn’t have the Global Positioning System without them. They also help synchronize Internet and cellphone communications, tie together telescope arrays, and test fundamental physics. Through our cellphones, or via low-frequency radio synchronization, cesium time standards trickle down to many of the clocks we use daily.
The accuracy of the cesium clock has improved greatly since 1955, increasing by a factor of 10 or so every decade. Nowadays, timekeeping based on cesium clocks accrues errors at a rate of just 0.02 nanosecond per day. If we had started such a clock when Earth began, about 4.5 billion years ago, it would be off by only about 30 seconds today.
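That figure is easy to verify with back-of-the-envelope arithmetic:

```python
# The article's numbers: 0.02 ns of accumulated error per day, over
# Earth's ~4.5-billion-year age.
ns_per_day = 0.02
days = 4.5e9 * 365.25
error_s = ns_per_day * 1e-9 * days
print(f"{error_s:.0f} seconds")  # ~33 s, i.e. "about 30 seconds"
```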
But we can do better. A new generation of atomic clocks that use laser light instead of microwave radiation can divide time more finely. About six years ago, researchers completed single-ion versions of these optical clocks, made with an ion of either aluminum or mercury. These surpassed the accuracy of cesium clocks by a full order of magnitude.
Now, a new offshoot of this technology, the optical-lattice clock (OLC), has taken the lead. Unlike single-ion clocks, which yield one measurement of frequency at a time, OLCs can simultaneously measure thousands of atoms held in place by a powerful standing laser beam, driving down statistical uncertainty. In the past year, these clocks have managed to surpass the best single-ion optical clocks in both accuracy and stability. With further development, they will lose no more than a second over 13.8 billion years—the present-day age of the universe.
So why should you care about clocks of such mind-boggling accuracy? They are already making an impact. Some scientists are using optical-lattice clocks as tools to test fundamental physics. And others are looking at the possibility of using them to better measure differences in how fast time elapses at various points on Earth—a result of gravity’s distortion of the passage of time as described by Einstein’s theory of general relativity. The power to measure such minuscule perturbations may seem hopelessly esoteric. But it could have important real-world applications. We could, for example, improve our ability to forecast volcanic eruptions and earthquakes and more reliably detect oil and gas underground. And one day, in the not-too-distant future, OLCs could enable yet another shift in the way we define time.
According to the rules of quantum mechanics, the energy of an electron bound to an atom is quantized. This means that an electron can occupy only one of a discrete number of orbiting zones, or orbitals, around an atom’s nucleus, although it can jump from one orbital to another by absorbing or emitting energy in the form of electromagnetic radiation. Because energy is conserved, this absorption or emission will happen only if the energy corresponding to the frequency of this radiation matches the energy difference between the two orbitals involved in the transition.
Atomic clocks work by exploiting this behavior. Atoms—of cesium, for example—are manipulated so that their electrons all occupy the lowest-energy orbital. The atoms are then hit with a specific frequency of electromagnetic radiation, which can cause an electron to jump up to a higher-energy orbital—the excited “clock state.” The likelihood of this transition depends on the frequency of the radiation that’s directed at the atom: The closer it is to the actual frequency of the clock transition, the higher the probability that the transition will occur.
To probe how often it happens, scientists use a second source of radiation to excite electrons that remain in the lowest-energy state into a short-lived, higher-energy state. These electrons release photons each time they relax back down from this transient state, and the resulting radiation can be picked up with a photosensor, such as a camera or a photomultiplier tube.

Magic Transition: In an optical-lattice clock, an electron [yellow dot] can absorb electromagnetic radiation to jump from a lower- to a higher-energy orbital around a clock atom’s nucleus [center]. Light used to trap the atom can shift the natural energy of each orbital [dotted lines] down in energy [solid lines]. This would ordinarily change the energy associated with the jump. But for a “magic wavelength” of trapping light, the energy shift of each orbital will be identical, and the frequency of the transition will remain the same.
If few photons are detected, it means that electrons are largely making the clock transition, and the incoming frequency is a good match. If many photons are being released, it means that most electrons were not excited by the clock signal. A servo-driven feedback loop is used to tune the radiation source so its frequency is always close to the atomic transition.
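A toy version of such a servo loop can be sketched numerically. Here the lineshape is assumed Gaussian, and the probe offsets and loop gain are arbitrary; a real clock servo is far more sophisticated:

```python
import math

# Toy clock servo: probe the excitation probability on both sides of the
# resonance and steer the local oscillator until the two sides respond
# equally. The Gaussian lineshape, offsets and gain are illustrative.
f_atom = 1000.0   # "true" transition frequency (arbitrary units)
linewidth = 1.0

def p_excite(f_probe):
    """Probability that an atom is excited by radiation at f_probe."""
    return math.exp(-((f_probe - f_atom) / linewidth) ** 2)

f_lo = 998.5                                # local oscillator starts detuned
half = linewidth * math.sqrt(math.log(2))   # half-maximum probe offset
for _ in range(200):
    # equal probabilities on both sides <=> oscillator is on resonance
    f_lo += 0.5 * (p_excite(f_lo + half) - p_excite(f_lo - half))

print(f"{f_lo:.3f}")  # locks onto 1000.000
```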
Converting this frequency reference into a clock that ticks off the seconds requires additional steps. Generally, the frequency measured in an atomic clock is used to calibrate other frequency sources, such as hydrogen masers and quartz clocks. A “counter,” made using basic analog circuitry, can be connected to a hydrogen maser to convert its electromagnetic signal into a clock that can count off ticks to mark the time.
The most common atomic clocks today use atoms of cesium-133, which has an electron transition that lies in the microwave range of the electromagnetic spectrum. If the atom is held at absolute zero and is unperturbed (more on that in a moment), this transition will occur at a frequency of exactly 9,192,631,770 hertz. And indeed, this is how we define the second in the International System of Units: it is the duration of 9,192,631,770 cycles of this radiation.
In actuality, cesium-133 isn’t so perfect a pendulum. Atoms experience various forms of perturbation because of their imperfect environment. For example, an atom’s motion through space, which in the laboratory can easily be as fast as 100 meters per second, can shift the frequency of an electron transition by means of the Doppler effect. This is the same phenomenon that affects the pitch of ambulance sirens and other sounds as the source of the sound moves relative to the listener. Interactions with the electron clouds of other atoms can also alter the energies of electron states, as can stray external electromagnetic fields.
Perturbations decrease a clock’s accuracy: how much the atom’s average frequency is shifted from its natural unperturbed value. A number of these offsets can be accounted for, and changes in clock design have helped minimize these shifts. Indeed, one of the most dramatic such improvements occurred in the early 1990s, when physicists developed the fountain clock. This clock uses a laser to launch cooled cesium atoms upward, as if they were water droplets from a fountain, so that the Doppler shift caused by the upward motion cancels out nearly all of the shift that occurs as they fall.
But nowadays cesium clocks can’t be improved much more. Tiny gains are increasingly difficult to achieve, and any gains we try to make now will take a long time. That’s because cesium clocks are pushing the limit of the other key metric we use to evaluate clocks: the stability of their frequency.
Frequency stability characterizes how clock frequency fluctuates over time. The bigger the frequency instability, the greater the frequency noise, so the clock frequency will sometimes be a bit higher and sometimes a bit lower than its average value.
Careful engineering can minimize most sources of frequency noise. But there’s a fundamental source of instability that is very difficult to overcome, because it comes from the probabilistic nature of quantum mechanics. To understand it, let’s go back to the basic operating principle of an atomic clock.
We typically excite the electrons in an atomic clock with radiation whose frequency doesn’t quite match the transition frequency. That’s because the probability that an electron will be excited follows a bell-curve-like distribution. On the sides of the bell curve, it’s easier to see whether a small change in frequency has occurred because it produces a more detectable effect, more dramatically increasing or decreasing the likelihood that an electron is excited [see illustration, “Finding the Frequency”]. Because of this, during the ordinary operation of an atomic clock, the clock radiation is set so that it has only a 50 percent probability of getting any given atom to make the clock transition. But even if the clock radiation frequency is set precisely at that point, an electron will be in either an excited or an unexcited state after it’s measured. The servo loop will then wrongly assume that the clock radiation frequency is either too high or too low and will introduce an undue frequency correction.
These miscorrections yield additional noise in the clock that we call quantum projection noise (QPN), and they are the main source of frequency instability in the best cesium clocks. Like many random sources of noise, the average level of QPN decreases with time. The longer you observe the clock, the more often the random upward shifts in frequency cancel out the downward shifts, and the noise eventually becomes negligible.
The catch is that this takes a long time in cesium: It takes about a day for the stability of the best cesium clocks to reach 2 parts in 10¹⁶, their steady-state accuracy level. (Metrologists commonly measure quantities such as accuracy and stability in fractional units. For a cesium clock with a frequency of 9.2 gigahertz, an accuracy of 2 × 10⁻¹⁶ translates to an uncertainty of 1.8 microhertz in the frequency.)
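The conversion from fractional to absolute uncertainty is a one-liner:

```python
# Fractional accuracy -> absolute frequency uncertainty for cesium:
f_cs = 9_192_631_770        # Hz, exact by definition of the second
uncertainty = f_cs * 2e-16  # 2 parts in 10^16
print(f"{uncertainty*1e6:.1f} microhertz")  # 1.8 microhertz
```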
You could run a series of experiments to make cesium clocks more accurate. But each measurement would have to consist of a lot of data taken over a very long time in order to minimize random fluctuations from measurement to measurement. In a series of experiments designed to push the clock accuracy down to 1 part in 10¹⁷, a 20-fold improvement, it could take an entire year just to make a single measurement.
Fortunately, there are other ways to minimize QPN. The noise is the same regardless of frequency, but its relative impact decreases the higher in frequency you go. And just as the average QPN decreases the longer you observe a clock, increasing the number of atoms you interrogate at the same time will boost the signal-to-noise ratio. The more you can sample in one go, the less uncertainty you’ll have in the number of atoms that made the clock transition.
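The scaling argument can be made concrete. Ignoring constant factors, QPN-limited fractional instability falls with clock frequency and with the square root of atom number and averaging time; the numbers below are rough orders of magnitude drawn from the article:

```python
import math

# QPN-limited fractional instability scales (constants omitted) as
#   sigma ~ (linewidth / f_clock) / sqrt(N * tau)
# so raising the clock frequency and interrogating more atoms both help.
def qpn_instability(f_clock, n_atoms, tau, linewidth=1.0):
    return (linewidth / f_clock) / math.sqrt(n_atoms * tau)

cesium  = qpn_instability(9.2e9,  1e6, tau=1.0)   # ~1e6 atoms at 9.2 GHz
lattice = qpn_instability(4.3e14, 1e4, tau=1.0)   # ~1e4 atoms at 429 THz
print(f"OLC is ~{cesium/lattice:.0f}x less noisy per second of averaging")
```

Even with 100 times fewer atoms, the 10,000-fold higher frequency leaves the lattice clock thousands of times less noisy for the same averaging time.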
Moving to higher frequencies is what motivated work on the optical atomic clock. The first of these clocks was developed in the early 1980s, and nowadays they can be built from any of a number of neutral or ionized versions of elements, including mercury, strontium, calcium, ytterbium, and aluminum. What they all have in common are relatively high resonance frequencies, which lie in the optical spectrum around several hundred thousand gigahertz—10,000 times cesium’s frequency. Using a higher frequency lowers the QPN, and it also lowers the relative impact of several factors that can shift the clock frequency. These include interactions with external magnetic fields coming from Earth or nearby metal (or, in Paris, the Métro lines). As an added bonus, if an optical clock is built with ions, those charged atoms can easily be trapped in an oscillating electric field that will cancel out most of their motion, effectively eliminating the Doppler effect.
But optical clocks have limitations of their own. If all other aspects of a clock are the same, the move to optical frequencies should lower the QPN to 0.01 percent of what it is in cesium. But many optical clocks are made with ions instead of neutral atoms, such as those used in cesium clocks. Because they’re charged, ions are fairly easy to trap, but they also easily push on one another when placed close together, creating motion that’s hard to control and causing a Doppler frequency shift. As a result, such clocks tend to use just one ion at a time and so are only about 20 times as stable and 25 times as accurate as the best cesium clocks, which can easily contain a million atoms. To get closer to the factor-of-10,000 boost in stability promised by optical clocks, we must find a way to boost the number of atoms in the optical clock, simultaneously interrogating many atoms so that the QPN averages out. And with the optical-lattice clock, researchers realized they could go quite big, measuring not just a handful of atoms but 10,000 or more at the same time.
It certainly isn’t easy. To build a clock out of 10,000 atoms, you must find a way to make an atomic ensemble that is both tightly confined (to minimize the Doppler effect) and very low in density (to minimize electromagnetic interactions among the atoms). The atoms in a typical crystal move too fast and interact too strongly to work, so the best way to proceed is to produce an artificial material with a lattice of your own creation.
To build an optical-lattice clock, we start much the same way we do in many cold-atom experiments, with an ensemble of slow-moving, laser-cooled neutral atoms. We send these into a vacuum vessel containing a single laser beam that has been reflected back on itself. An interference pattern arises in the areas where the beam overlaps with itself, creating an optical lattice made of thousands of small “pancakes” of light. The atoms fall into the lattice like eggs into an egg carton because of a force that draws each of them toward a spot where the light intensity is at a maximum. Once the atoms are in place, we use a separate “clock laser” to excite the atoms so that we can measure the frequency of the clock transition.
The difficulty is that the clock atoms aren’t so easy to coerce into this lattice. Inexpensive lasers have outputs in the milliwatts. To create a lattice strong enough to trap and hold a neutral atom, you need several watts of light. Such a powerful laser beam, however, can shift energy levels in clock atoms, pushing their transition frequency far from their natural state. The amount of this shift will vary with the intensity of the trapping light, and that intensity is hard to control. Even with very careful calibration, this large frequency shift would render the clock much more inaccurate than even the very first cesium clocks.
Fortunately, physicist Hidetoshi Katori conceived a workaround in the early 2000s. When atoms are hit with the trapping light, the energy associated with each electron orbital decreases. Katori, then at the University of Tokyo, noted that each orbital will respond differently, with an energy shift that will depend on the wavelength of the trapping light. For a specific, “magic” wavelength, the shift of both orbitals will be identical, and so the energy difference between the two orbitals will be unchanged. This magic wavelength, where the clock frequency stays the same whether the atoms are trapped or not, is different for each element. For strontium, it’s 813 nanometers, in the infrared part of the spectrum. Ytterbium’s magic wavelength is 759 nm; mercury’s is in the ultraviolet part of the spectrum, at 362 nm.

Time Marches On: Atomic clocks have made great strides since their start in 1955, improving in accuracy by a factor of 10 or so each decade. Cesium clocks [green], which employ microwave radiation to interrogate ensembles of cesium atoms, were the first. These were surpassed in accuracy in the 2000s by optical clocks [pink], which use laser light and often just a single ion. This year, optical-lattice clocks, which incorporate thousands of atoms [blue], became the most accurate atomic clocks. The symbol by each optical-lattice point denotes the atomic species used in the clock: strontium (Sr), mercury (Hg), and ytterbium (Yb).

When Katori made his proposal, my group at the Paris Observatory's Systèmes de Référence Temps-Espace (LNE-SYRTE) department, which is responsible for maintaining France's reference time and frequency signals, had already been investigating the use of strontium for optical clocks. We set to work almost immediately to see if we could make an optical-lattice clock using strontium, competing at first with just two other groups that had long-standing experience working with cooled strontium: Katori's team in Tokyo and Jun Ye's group at JILA, in Boulder, Colo. A decade and many projects later, other groups have built lattice clocks using strontium and ytterbium. More experimental projects using mercury or magnesium, which require still higher-frequency and less-well-developed lasers, are also in the works.
One of the key factors in making optical-lattice clocks more accurate over the past few years has been the development of clock lasers with very narrow spectra—essentially just a small spike at one particular frequency. We need these to effectively explore the region around the transition frequency of the clock, to see in fine detail how a slight shift in the clock frequency affects the transition probability.
The best way to make narrow-lined laser light is to feed it into a mirrored chamber called a Fabry-Pérot cavity. After bouncing back and forth up to a million times inside this cavity, light of any arbitrary wavelength will have interfered with itself and canceled itself out. Only laser light with a wavelength that is a unit fraction of the length of the cavity emerges.
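The resonance condition is simple: only light whose wavelength fits a whole number of half-waves into the cavity survives, so the resonant frequencies are spaced by the free spectral range c/2L. A sketch with an assumed 30 cm cavity:

```python
# A cavity of length L resonates when an integer number of half-waves
# fits: lambda_n = 2L/n, i.e. f_n = n*c/(2L), spaced by the free
# spectral range c/(2L). L = 30 cm is an assumed, typical value.
c = 299_792_458.0   # speed of light, m/s
L = 0.30            # cavity length, m

fsr = c / (2 * L)
print(f"free spectral range: {fsr/1e6:.0f} MHz")  # ~500 MHz
```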
While the cavity helps to filter out natural fluctuations in the frequency of a laser source, the technique isn’t perfect. The frequency of the clock laser that emerges from the cavity can wobble around because of thermal fluctuations that cause the cavity to slightly expand or contract.
But over the past few years, researchers have found ways to help mitigate this effect. Cavities were made longer, so the relative impact of a small change in length is smaller. Vibrations were damped. The cavities were also cooled to cryogenic temperatures, to limit tiny expansions and contractions due to thermal energy.
The net result was much more stable clock lasers. Nowadays, over the few seconds it takes to prepare and probe clock atoms, a 429-terahertz clock laser might drift in frequency by just 40 millihertz or so. For a typical cavity, with a length of a few dozen centimeters, that amounts to changes in its length of no more than a few percent of the size of a proton for the several seconds it takes to prepare and probe the atoms in the optical clock.
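The proton-size comparison checks out, assuming a 30 cm cavity (the article says "a few dozen centimeters"):

```python
# 40 mHz drift of a 429 THz laser, expressed as a cavity length change
# for an assumed 30 cm cavity (dL/L = df/f), compared to a proton.
df, f = 40e-3, 429e12
L = 0.30
dL = (df / f) * L              # meters
proton_radius = 0.84e-15       # m (CODATA proton charge radius)
print(f"dL = {dL:.1e} m, ~{dL/proton_radius*100:.0f}% of a proton radius")
```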
Largely due to this effort, the stability reached within one day with cesium clocks, or within a few minutes with optical-ion clocks, can now be reached in 1 second with an optical-lattice clock, close to the QPN limit. This improved stability makes the clock itself a tool. The less time you need to gather data to measure an atomic clock's frequency with precision, the faster you can use the clock to run experiments to explore ways to make it better. Indeed, just three years after the first frequency stability improvements were demonstrated in optical-lattice clocks at the U.S. National Institute of Standards and Technology, these clocks took the lead for accuracy. The published record is now held by one of the strontium OLCs at JILA, which boasts an estimated accuracy of 6.4 parts in 10¹⁸.
A clock is only so good on its own. Evaluating one clock requires another, comparable clock to serve as a reference. When OLCs were first developed a decade ago, the initial comparisons were done between strontium OLCs and cesium clocks. These measurements were enough to establish the early promise of OLCs. But to truly ascertain the accuracy of an atomic clock, it’s crucial to directly compare two clocks of the same type. If they are as accurate as advertised, their frequencies should be identical.
So as soon as we had finished building one strontium optical-lattice clock in 2007, we began work on a second. We finished the second clock in 2011, and set to work making the first comparison between two optical-lattice clocks in order to directly establish their accuracy, without relying on cesium clocks.
Once a second clock is built, previously undetectable problems soon become apparent. And indeed, we soon uncovered flaws that had been overlooked. One was the influence of static electric charges that had become trapped on the windows of the vacuum chamber. We had to shine ultraviolet light on the windows to efficiently dislodge the charges.
In a paper that appeared last year in Nature Communications, we showed that our two strontium OLCs agree down to the 1 part in 10¹⁶ level, a solid confirmation that these clocks are more accurate than the best cesium clocks. Earlier this year, Katori's team at the research institution Riken, in Wako, Japan, reported an agreement of a few parts in 10¹⁸ in similar clocks, this time enclosed in a cryogenic environment.
Incidentally, the frequency of an optical clock is so fast that no electronic device could possibly count its ticks. These sorts of clock comparisons rely on a new technology that's still very much in development: the frequency comb. This instrument uses femtosecond laser pulses to create a spectrum that consists of coherent, equally spaced teeth spanning the visible and infrared spectrum. In effect, it acts like a ruler for optical frequencies.
The ability to perform comparisons between OLCs pushes us further along the road to redefining the second. Before a redefinition in the International System of Units can take place, a large number of laboratories must demonstrate not only that they can implement the new standard but also that they can compare their measurements. Consensus is needed to establish that all the laboratories are on the same page. It is also necessary to ensure that the world can literally keep time: Coordinated Universal Time, the time by which the world's clocks are set, and the International Atomic Time it's derived from, are created by taking a weighted average of a large number of microwave clocks around the world.
Cesium clocks are “networked” using the signals emitted by satellites and are compared by microwave transmission. This is good enough for microwave clocks but too unstable for distributing more-accurate optical-lattice clock signals. But soon, international comparisons of optical clocks will reach a new milestone. New fiber connections, built with dedicated phase-compensation systems that can cancel small timing shifts introduced by the lines, are now being constructed.
By the end of this year, thanks to a number of national and international projects, we expect to be able to start using such connections to make the first comparisons between optical-lattice clocks based at LNE-SYRTE in Paris and the Physikalisch-Technische Bundesanstalt, Germany’s national metrology center, in Braunschweig. A link to the National Physical Laboratory, in London, which has strontium- and ytterbium-ion clocks, is also set to be completed early next year. These efforts will pave the way for an international metrology network that could enable a new standard for the second.
In the meantime, scientists have already begun using optical-lattice clocks as a tool to explore nature. One focus has been on measuring the frequency ratio between two clocks that use different types of atoms. This ratio depends on fundamental physical constants, such as the fine-structure constant, which could reveal new physics if it turns out to vary in time or from place to place.
Astronomers may also benefit from optical clocks. Atomic clocks are used as a time reference in radio astronomy, allowing astronomers to combine the light collected by telescopes separated by hundreds or thousands of kilometers to produce a virtual telescope, with an angular resolution equivalent to that of a single telescope spanning that entire distance. As optical atomic clocks mature, they could enable a similar feat for optical telescopes.
And it’s not hard to imagine that optical-lattice clocks could offer new insight into the world beneath our feet. According to Einstein’s theory of general relativity, a clock sitting on a denser part of Earth will tick slower relative to one situated on a part that’s less dense. Although gravimeters can be used to measure gravitational force at any one point, measuring gravitational potential—which could shed light on different, deeper structures inside Earth—must be done by integrating the measurements of gravimeters at different points around Earth’s surface or by measuring the orbits of satellites. Metrologists and geodesists are now teaming up to understand what optical-lattice clocks will be able to offer. It’s possible that they could be used at different points around Earth to assist with oil detection, earthquake monitoring, and volcano prediction.
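The height sensitivity mentioned above follows from general relativity's gravitational redshift: near Earth's surface, a clock raised by a height dh runs faster by the fraction g·dh/c^2. A quick sketch with standard constants (no assumptions beyond the weak-field approximation):

```python
# Gravitational redshift near Earth's surface: a clock raised by dh
# runs fast by the fraction g * dh / c**2. At 1e-18 accuracy an
# optical-lattice clock can therefore resolve centimetre-scale height
# (i.e. gravitational-potential) differences.
g = 9.81            # m/s^2, local gravitational acceleration
c = 299_792_458.0   # m/s, speed of light

def fractional_shift(dh_m: float) -> float:
    """Fractional frequency shift between clocks separated by dh_m metres."""
    return g * dh_m / c**2

print(f"1 cm height difference: {fractional_shift(0.01):.2e}")
print(f"1 m  height difference: {fractional_shift(1.0):.2e}")
```

A 1 cm height change shifts the rate by about 1 part in 10^18, which is why clocks at that accuracy double as probes of the local gravitational potential.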
In the meantime, there is still work to be done to keep improving the stability and accuracy of OLCs. Recently, a large effort has been made to fight the effect of black-body radiation. This radiation is unavoidably emitted by any physical body with nonzero temperature, including the vacuum chamber that surrounds the clock atoms. When it interacts with the atoms it shifts the energy levels of the clock transition. This shift can be corrected after the fact, but a precise knowledge of the temperature and emissivity of the vacuum chamber must be acquired. It is also possible to enclose the atoms in a cryogenic environment or use an atomic species that is inherently less sensitive to black-body radiation, such as mercury, a route that our group is exploring.
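The benefit of a cryogenic enclosure can be estimated from the fact that the black-body Stark shift scales, to leading order, as the fourth power of temperature, following the T^4 black-body energy density. A sketch of the relative suppression, with the temperatures as illustrative values:

```python
# The black-body Stark shift scales, to leading order, as T^4 (the
# black-body energy density). Cooling the environment from room
# temperature to liquid-nitrogen temperature therefore suppresses the
# shift by more than two orders of magnitude.
def bbr_relative(t_kelvin: float, t_ref: float = 300.0) -> float:
    """Black-body shift relative to its value at t_ref (leading T^4 term)."""
    return (t_kelvin / t_ref) ** 4

print(f"at 77 K the shift is {bbr_relative(77.0):.4f} of its 300 K value")
```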
Before the end of the decade, new generations of ultranarrow lasers are also likely to help push stabilities below 1 part in 10^17 after a single second of data gathering. That will make it practical for us to achieve an accuracy below 10^-18, more than 100 times the precision of cesium clocks. As OLCs become more accurate, the scope of applications will continue to expand.
Even if OLCs are wildly successful, we won’t abandon the cesium clock, which will remain more compact and less expensive to build. And in the future, OLCs may be supplanted by clocks of even higher frequencies that rely on energy transitions inside the atom’s nucleus instead of among the electrons in orbit around it. These nuclear transitions are mostly out of reach of current laser technology, although researchers are starting to explore them.
But before long we will see yet another time standard that could significantly influence the way we relate to our universe. Just as surely as time keeps on ticking, improvements in our ability to measure it will go on.
This article originally appeared in print as “An Even Better Atomic Clock.”
Courtesy: IEEE Spectrum

TrueNorth Chip: How IBM Got Brainlike Efficiency From the TrueNorth Chip

TrueNorth takes a big step toward using the brain’s architecture to reduce computing’s power consumption




Neuromorphic computer chips meant to mimic the neural network architecture of biological brains have generally fallen short of their wetware counterparts in efficiency—a crucial factor that has limited practical applications for such chips. That could be changing. At a power density of just 20 milliwatts per square centimeter, IBM’s new brain-inspired chip comes tantalizingly close to such wetware efficiency. The hope is that it could bring brainlike intelligence to the sensors of smartphones, smart cars, and—if IBM has its way—everything else.
The latest IBM neurosynaptic computer chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses conveying signals between the digital neurons. Each of the chip’s 4,096 neurosynaptic cores includes the entire computing package: memory, computation, and communication. Such architecture helps to bypass the bottleneck in traditional von Neumann computing, where program instructions and operation data cannot pass through the same route simultaneously.
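The headline figures are rounded; TrueNorth's published layout is a 64-by-64 grid of cores, each holding 256 neurons and a 256-by-256 synapse crossbar, and the totals follow directly from the per-core numbers:

```python
# TrueNorth's totals from its per-core layout: a 64 x 64 grid of
# neurosynaptic cores, each with 256 neurons and a full 256 x 256
# synapse crossbar (every incoming axon can connect to every neuron).
cores = 64 * 64                 # 4,096 cores
neurons_per_core = 256
synapses_per_core = 256 * 256   # crossbar connections per core

neurons = cores * neurons_per_core    # 1,048,576 -> "1 million"
synapses = cores * synapses_per_core  # 268,435,456 -> "256 million"

print(f"{cores:,} cores, {neurons:,} neurons, {synapses:,} synapses")
```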
“This is literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid,” says Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing at IBM Research-Almaden, in San Jose, Calif.
Such chips can emulate the human brain’s ability to recognize different objects in real time; TrueNorth showed it could distinguish among pedestrians, bicyclists, cars, and trucks. IBM envisions its new chips working together with traditional computing devices as hybrid machines, providing a dose of brainlike intelligence. The chip’s architecture, developed together by IBM and Cornell University, was first detailed in August in the journal Science.

Friday, 29 August 2014

THE IEEE MEDAL OF HONOUR

The IEEE Medal of Honor was established in 1917 and is the highest IEEE award, given to an engineer who has made extraordinary contributions in the fields of science and technology.

Thursday, 28 August 2014

ORGANOID BIOCHIPS

Scientists are currently developing “organoid biochips” that use living semiconductors to mimic the human body’s reactions. These chips could allow doctors to test what drugs might heal you fastest before doing trial and error on your body.

IEEE MEDAL OF HONOUR 2014

From the past weekend's "2014 IEEE Honors Ceremony": the highest distinction awarded was the IEEE Medal of Honor. For almost 100 years this award has recognized the top engineers in their fields of interest. Pictured here are the first recipient in 1917 and this year’s recipient. Congrats to B. Jayant Baliga for joining such an esteemed group of engineers!

Saturday, 9 August 2014

IBM ACHIEVES A NEW MILESTONE

IBM has built a computer chip called the Neurosynaptic System. It contains 5.4 billion transistors and uses just 70 milliwatts of power (1/10,000th the power of most microprocessors). http://bit.ly/1q1wio1


Tuesday, 5 August 2014

DOWNLOAD 43TB IN 1 SECOND !!!

World data transfer record back in Danish hands

Researchers at DTU Fotonik, the photonics department of the Technical University of Denmark (DTU), have reclaimed the world data transfer record.


The world champions in data transmission are to be found in Lyngby, where the High-Speed Optical Communications (HSOC) team at DTU Fotonik has just secured yet another world record. This time, the team has eclipsed the record that was set by researchers at the Karlsruhe Institut für Technologie, by proving that it is possible to transfer fully 43 terabits per second with just a single laser in the transmitter. This is an appreciable improvement on the German team’s previous record of 26 terabits per second.
The worldwide competition in data speed is contributing to developing the technology intended to accommodate the immense growth of data traffic on the internet, which is estimated to be growing by 40–50 per cent annually. What is more, emissions linked to the total energy consumption of the internet as a whole currently correspond to more than two per cent of the global man-made carbon emissions—which puts the internet on a par with the transport industry (aircraft, shipping etc.). However, these other industries are not growing by 40 per cent a year. It is therefore essential to identify solutions for the internet that make significant reductions in energy consumption while simultaneously expanding the bandwidth. This is precisely what the DTU team has demonstrated with its latest world record. DTU researchers have previously helped achieve the highest combined data transmission speed in the world—an incredible 1 petabit per second—although this involved using hundreds of lasers.
The researchers achieved their latest record by using a new type of optical fibre borrowed from the Japanese telecoms giant NTT. This type of fibre contains seven cores (glass threads) instead of the single core used in standard fibres, which makes it possible to transfer even more data. Despite the fact that it comprises seven cores, the new fibre does not take up any more space than the standard version.
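A quick sanity check on the numbers: the record is 43 terabits (not terabytes) per second, and over the seven-core fibre (assuming an even split across cores) that works out to roughly 6 Tbit/s per core:

```python
# 43 terabits per second over a seven-core fibre: per-core throughput
# (assuming an even split) and the equivalent in terabytes per second.
rate_tbit_s = 43.0
fibre_cores = 7

per_core_tbit_s = rate_tbit_s / fibre_cores  # ~6.14 Tbit/s per core
rate_tbyte_s = rate_tbit_s / 8               # ~5.4 TB/s (8 bits per byte)

print(f"{per_core_tbit_s:.2f} Tbit/s per core")
print(f"{rate_tbyte_s:.2f} TB/s aggregate")
```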
The researchers’ record result has been verified and presented in what is known as a ‘post deadline paper’ at the CLEO 2014 international conference.
The High-Speed Optical Communications team at DTU Fotonik has held the world record in data transmission on numerous occasions. Back in 2009, these researchers were the first in the world to break the ‘terabit barrier’, which was considered an almost insurmountable challenge at that time, when they succeeded in transmitting more than 1 terabit per second—again using just a single laser. The benchmark has now been raised to 43 Tbit/s.

SEE MORE IN OUR FACEBOOK PAGE TOO:
https://www.facebook.com/WGKERCofficial



Monday, 4 August 2014

FUN WITH ELECTRONICS !!!

SOME BEAUTIFUL CREATIONS WITH ELECTRONIC COMPONENTS


     MR.RESISTOR !!!


    MULTIGYM !!!


    LETZ COOK SOME RESISTORS !!!





Sunday, 3 August 2014

Happy 100th Anniversary to the National Electrical Safety Code (NESC)!

Happy 100th Anniversary to the National Electrical Safety Code (NESC)! From the Edison light bulb and Tesla coil to the electric vehicle, see the IEEE Standards Association infographic on the evolution of electricity during the past 100+ years: http://bit.ly/1kpaIha


MISSION WITH A VISION


                           

FIRST DAY

https://www.facebook.com/WGKERCofficial

Starting from today... letz start from our Facebook page

The intention behind our venture is to link all electronics enthusiasts

2013 ELECTRONICS © Original & Official Page █║▌│█│║▌║││█║▌║▌║

DISCUSSIONS IN THIS PAGE & BLOG ARE BASED ON ELECTRONICS.
* CIRCUIT DIAGRAMS
* LATEST INVENTIONS
* PCB DESIGNS
* NEW IDEAS
* SUGGESTIONS