Electronic computers excel at performing long sequences of operations at very high speed. However, they struggle with combinatorial tasks that could be solved far faster if many operations were performed in parallel, for example in cryptography and mathematical optimisation, which require the computer to test a large number of candidate solutions. There have been significant efforts to devise parallel-computation approaches in the past, for example quantum computation and microfluidics-based computation. So far, however, these approaches have not proven scalable and practical from a fabrication and operational perspective.
Biocomputers are now becoming feasible thanks to advances in nanobiotechnology and synthetic biology. Biocomputers use systems of biologically derived molecules, such as DNA and proteins, to perform computation. Compared with conventional computers, DNA used as a computing medium may prove to be a billion times more energy-efficient and to offer a trillion times more data-storage capacity: DNA stores information at a density of about 1 bit/nm³, roughly a trillion times denser than videotape.
The CRISPR gene-editing system is usually known for helping scientists treat genetic diseases, but the technology has a whole range of possible uses in synthetic biology too. Now researchers at ETH Zurich have used CRISPR to build functional biocomputers inside human cells.
The economic benefit of biocomputers lies in the potential of biologically derived systems to self-replicate and self-assemble into functional components under appropriate conditions. For instance, all of the proteins needed for a certain biochemical pathway, modified to serve as a biocomputer, could be synthesized many times over inside a biological cell from a single DNA molecule, and that DNA molecule could itself be replicated many times over. This characteristic of biological molecules could make their production highly efficient and relatively inexpensive. Whereas electronic computers require manual production, biocomputers could be grown in large quantities from cultures, with no additional machinery needed to assemble them.
The Army is also looking for systems that are resistant to electronic attack and nuclear radiation. The Army’s command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) capabilities are especially vulnerable to the effects of radiation and the extreme electromagnetic pulses associated with detonations of nuclear or other high-radiation weapons. Furthermore, the two key schemes used to protect them, high redundancy and Faraday isolation, add significant weight and increase power consumption. The development of biomolecular hybrid components may reduce this vulnerability to radiation extremes.
Originated by Leonard Adleman at the University of Southern California in 1994, DNA computation makes use of the encoding properties of strands of DNA subunits to compute the solution to problems.
DNA consists of two long chains of alternating phosphate and deoxyribose units twisted into a double helix and joined by hydrogen bonds between two pairs of nucleotides, adenine and thymine (A and T) or cytosine and guanine (C and G). In living organisms, each base bonds with its complement—A to T and C to G—in a sequence that determines the organism’s hereditary characteristics. To use DNA in a computation, one must first puzzle out which sequences reacting in which ways accurately replicate the algorithm in question, and then custom-make single strands of DNA with the desired sequences, known as oligonucleotides.
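The complementarity rule described above is easy to sketch in code. The following Python snippet (illustrative only, not part of any published procedure) computes the reverse complement of an oligonucleotide:

```python
# Watson-Crick complementarity: A pairs with T, C pairs with G.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the complementary strand, read in the opposite direction
    (the reverse complement), as it would hybridize in solution."""
    return "".join(PAIRS[base] for base in reversed(strand))

oligo = "ATCGGA"
print(complement(oligo))  # -> TCCGAT
```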
Researchers from The University of Manchester have shown that it is possible to build a new super-fast form of computer that “grows as it computes”. “Imagine a computer is searching a maze and comes to a choice point, one path leading left, the other right,” explained Professor King, from Manchester’s School of Computer Science. “Electronic computers need to choose which path to follow first. But our new computer doesn’t need to choose, for it can replicate itself and follow both paths at the same time, thus finding the answer faster.”
As DNA molecules are very small, a desktop computer could potentially utilize more processors than all the electronic computers in the world combined, and therefore outperform the world’s current fastest supercomputer while consuming a tiny fraction of its energy. DNA computing is also massively parallel. Researchers are currently trying to exploit these properties for several purposes, including solving NP-complete problems (mathematical problems for which no known algorithm finds a solution in time bounded by a polynomial in the input size), searching large databases, solving problems that require vast amounts of memory, and encrypting data.
Scientists used CRISPR to turn a cell into a Biological Computer
“The human body itself is a large computer,” says Martin Fussenegger, lead researcher of the study. “Its metabolism has drawn on the computing power of trillions of cells since time immemorial. And in contrast to a technical supercomputer, this large computer needs just a slice of bread for energy.”
Tapping into these natural processes to build logic circuits is a key goal of synthetic biology. In this case, the ETH Zurich team found a way to slot dual-core processors into human cells by first modifying the CRISPR gene-editing tool. Normally, this system uses guide RNA sequences to target specific DNA segments in the genome, then make precise edits. For this project though, the team created a special version of the Cas9 enzyme that can act as a processor.
This special Cas9 instead reads guide RNA as inputs, and in response expresses particular genes. That in turn creates certain proteins as the output. These processors act like digital half adders – essentially, they can compare two inputs or add two binary numbers, and deliver two outputs. To boost the computing power, the researchers managed to squeeze two processor cores into one cell.
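The half-adder behaviour attributed to these cellular processors can be written down as ordinary Boolean logic. A minimal Python sketch follows; the biological version of course uses guide RNAs and gene expression levels, not integers:

```python
# A half adder adds two binary inputs and yields two outputs:
# the sum bit (XOR) and the carry bit (AND).
def half_adder(a, b):
    return a ^ b, a & b  # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```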
In the long run, these dual-core cell computers could be stacked up by the billion to make powerful biocomputers for diagnosing and treating disease. For example, the team says they could look for biomarkers and respond by creating different therapeutic molecules, depending on whether one, the other or both biomarkers are present.
“Imagine a microtissue with billions of cells, each equipped with its own dual-core processor,” says Fussenegger. “Such ‘computational organs’ could theoretically attain computing power that far outstrips that of a digital supercomputer – and using just a fraction of the energy.” The research was published in the journal PNAS.
Scientists Construct Biocomputer Made From Living Human Cells
A team from ETH Zurich and the University of Basel is making headway on constructing biocomputers – those made from living cells – and a new paper, in Nature Methods, details their most advanced system to date.
Using nine different cell populations assembled into 3D cultures, the team of synthetic biologists managed to get them to behave like a very simple electronic computational circuit. Take out the electrical wiring and signaling, replace them with chemical inputs, and you’ve got a living computer that responds to incoming data and can process it using the rudimentary logic gates AND, NOT, and OR.
This team had previously managed to get a couple of cells to perform basic addition tasks, but for this project, they made bespoke genetic programs for each of the nine individual human cell types involved in their biocomputer.
Prevented from responding to the wide array of biochemical signals they normally would, each cell type was altered to execute just one clearly defined computational instruction. This allowed the collective cells – the biocomputer – to perform “full-adder” calculations, which essentially means it can perform more detailed, interrelated sums simultaneously. By rearranging the cells, different types of calculations could be carried out.
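A full adder, the calculation this nine-cell biocomputer performs, can be built by chaining two half adders. The Python sketch below shows the logic only; treating each function as standing in for an engineered cell population is an analogy, not the team’s actual wiring:

```python
def half_adder(a, b):
    # sum bit (XOR) and carry bit (AND)
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    # chain two half adders; either stage may raise the carry
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

print(full_adder(1, 1, 1))  # -> (1, 1), i.e. binary 11 = decimal 3
```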
There are wires here, in a manner of speaking, but unlike static copper ones, this system can “produce and sense chemical communication wires” to perform computational tasks. It’s a remarkable system, one that has the potential to adapt and evolve.
Biocomputation is a nascent research field. It’s incredibly difficult to engineer a system like this, because biological matter is far more intricate and mercurial than copper circuitry. Forget simple addition, though: If efforts like this continue, expect to see implantable, living computers within animals – those perfectly integrable with their own biology – in the near future. A 2009 paper spoke of a world in which these systems will be used to diagnose diseases and create “designer” cell functions within other lifeforms.
Bio Breakthrough: Scientists Unveil First Ever Biological Supercomputer
Researchers from the EU-funded ABACUS project have created a model biological supercomputer that is both sustainable and highly energy efficient. The model bio-supercomputer is powered by adenosine triphosphate (ATP), the substance that provides energy to all of the cells in a human body. The model is able to process information extremely quickly and accurately using parallel networks, in the same fashion that electronic supercomputers are able to process information.
Their biocomputer is a chip measuring about 1.5 cm square, into which channels have been etched to create a specifically designed, nanostructured network explored by a large number of molecular-motor-driven protein filaments. Instead of electrons propelled by an electrical charge around a traditional microchip, short strings of proteins (which the researchers call biological agents) travel around the circuit in a controlled way, their movements powered by adenosine triphosphate (ATP), which the scientific community often refers to as the cell’s “molecular unit of currency.”
“In our network encoding of the SSP [subset sum problem], the channel-guided unidirectional motions of agents are equivalent to elementary operations of addition, and their spatial positions in the network are equivalent to ‘running sums.’ Starting from an entrance point at one corner of the network, agents are guided unidirectionally downward by the channels in vertical or diagonal directions. Two types of junctions were designed to regulate the motion of agents in the network: (i) ‘split junctions,’ where agents are randomly distributed between two forward paths, and (ii) ‘pass junctions,’ where agents are guided onward to the next junction along the initial direction.”
“In simple terms, it involves the building of a labyrinth of nano-based channels that have specific traffic regulations for protein filaments. The solution in the labyrinth corresponds to the answer of a mathematical question, and many molecules can find their way through the labyrinth at the same time”, says Heiner Linke, director of NanoLund and coordinator of the parallel computer study. Notably, once the device is loaded with the required number of agents, the effective computational time for NP-complete problems grows only polynomially, e.g., as N2 if the elements of S are approximately equidistantly spaced. This is in contrast to traditional, sequentially operating, electronic computers, where the time required to explore every possible solution sequentially would scale exponentially as 2N.
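The contrast between parallel exploration and sequential enumeration shows up in a brute-force sketch of the subset sum problem: a sequential electronic computer must walk through all 2^N subset choices one by one, whereas in the device each agent takes one path through the network and all agents run simultaneously. An illustrative Python enumeration (the sequential case):

```python
from itertools import product

def subset_sums(s):
    """Enumerate every 'running sum' an agent could reach: at each
    split junction the agent either adds the next element or skips it,
    so there are 2**len(s) paths in total."""
    sums = set()
    for choices in product((0, 1), repeat=len(s)):
        sums.add(sum(x for x, keep in zip(s, choices) if keep))
    return sums

print(sorted(subset_sums([2, 5, 9])))  # -> [0, 2, 5, 7, 9, 11, 14, 16]
```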
According to the researchers, the biocomputer would require less than one percent of the energy consumed by an electronic transistor, making them much more sustainable than electronic supercomputers, which often require their own power plant to function. “It’s hard to say how soon it will be before we see a full scale bio super-computer. One option for dealing with larger and more complex problems may be to combine our device with a conventional computer to form a hybrid device,” Dan Nicolau from the department of bioengineering at Canada’s McGill University said in a statement. “Right now we’re working on a variety of ways to push the research further.”
Led by Yaniv Erlich, the team of engineers successfully stored and retrieved data in DNA at a record density of roughly 215 petabytes per gram. They took advantage of the structure of DNA molecules, which look like twisting ladders whose rungs are denoted by the letters A, C, G, and T. This genetic sequence typically acts as a building block for living things, and if one maps it to the binary digits 0 and 1, DNA molecules can encode almost anything. Of course, the process is not that easy, because not all DNA sequences are robust enough, said Erlich. What’s more, not all data stored in DNA can be retrieved successfully.
Calling their process a “DNA Fountain,” the researchers first compressed all the data into a single master archive and split it into short strings of binary digits, made up of ones and zeros. Next, they used an erasure-correcting algorithm called fountain codes to randomly package the strings into droplets. Each droplet contains a barcode in its sequence that helped the researchers reassemble the file.
The researchers then “mapped the ones and zeros in each droplet to the four nucleotide bases in DNA: A, G, C and T,” and ended up with a digital list of 72,000 DNA strands that contained the encoded data. This way, the DNA sequence can still be decoded even if a few codes get lost. If stored appropriately, DNA can last hundreds of thousands of years and hold vast amounts of data. “DNA won’t degrade over time like cassette tapes and CDs,” said Erlich. “[I]t won’t become obsolete.”
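The final mapping step can be sketched as a simple codec. Note this is a deliberate simplification: the real DNA Fountain pipeline also applies fountain codes and screens candidate strands for problematic sequences (long homopolymers, extreme GC content), which this toy version ignores.

```python
# Hypothetical 2-bits-per-base mapping in the spirit of the final
# encoding step; the pairing chosen here is illustrative.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(bits):
    """Map a bit string (even length) to a DNA strand, 2 bits per base."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand):
    """Recover the original bit string from a strand."""
    return "".join(BASE_TO_BITS[base] for base in strand)

payload = "0110001011"
strand = encode(payload)
print(strand)                       # -> CGAGT
print(decode(strand) == payload)    # -> True
```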
Last year, researchers at Microsoft and the University of Washington reached an early but important milestone in DNA storage by storing a record 200 megabytes of data on the molecular strands. Microsoft stored more than 100 books and a music video in DNA occupying a spot in a test tube “much smaller than the tip of a pencil,” said Douglas Carmean. Writing the data involved translating its 1s and 0s into the letters of the four nucleotide bases of DNA, synthesizing the corresponding molecules, and then sequencing them to read the data back.
“DNA is an amazing information storage molecule that encodes data about how a living system works. We’re repurposing that capacity to store digital data — pictures, videos, documents,” said Ceze, who is conducting research in the team’s Molecular Information Systems Lab (MISL), which is housed in a basement on the University of Washington campus. “This is one important example of the potential of borrowing from nature to build better computer systems.”
Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up, says Microsoft. Most of the world’s data today is stored on magnetic and optical media. Despite improvements in optical discs, storing a zettabyte of data would still take many millions of units, and use significant physical space. If we are to preserve the world’s data, we need to seek significant advances in storage density and durability. Using DNA to archive data is an attractive possibility because it is extremely dense (up to about 1 exabyte per cubic millimeter) and durable (half-life of over 500 years).
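The density figure quoted above implies a striking back-of-envelope result: at about 1 exabyte per cubic millimetre, a zettabyte would occupy roughly a cubic centimetre of DNA. A sketch of the arithmetic:

```python
# Back-of-envelope check of the density claim (figures from the text).
EXABYTES_PER_MM3 = 1.0          # stated DNA storage density
ZETTABYTE_IN_EXABYTES = 1000    # 1 ZB = 1000 EB

volume_mm3 = ZETTABYTE_IN_EXABYTES / EXABYTES_PER_MM3
print(volume_mm3)  # -> 1000.0 mm^3, i.e. a cube 1 cm on a side
```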
While this is not practical yet due to the current state of DNA synthesis and sequencing, these technologies are improving quite rapidly with advances in the biotech industry. Given the impending limits of silicon technology (end of Moore’s Law), we believe hybrid silicon and biochemical systems are worth serious consideration. Biotechnology has benefitted tremendously from progress in silicon technology developed by the computer industry; now is the time for computer architects to consider incorporating biomolecules as an integral part of computer design.
Computers made of genetic material? Researchers conduct electricity using DNA-based nanowires
Scientists at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and Paderborn University are developing a technique for creating electronic circuits from DNA. Their strategy is called the “bottom-up” method, which aims to build complex molecular structures through self-assembly. The “top-down” method currently used by industry, which cuts large portions from the base material until the desired structure is achieved, has achieved miniaturization down to fourteen nanometres, but is predicted to soon reach its limits.
The physicists conducted a current through gold-plated nanowires, which independently assembled themselves from single DNA strands. In order to produce the nanowires, the researchers combined a long single strand of genetic material with shorter DNA segments through the base pairs to form a stable double strand. Using this method, the structures independently take on the desired form. Their results have been published in the scientific journal Langmuir. “With the help of this approach, which resembles the Japanese paper folding technique origami and is therefore referred to as DNA-origami, we can create tiny patterns,” explains the HZDR researcher. “Extremely small circuits made of molecules and atoms are also conceivable here.”
There is, however, a problem: “Genetic matter doesn’t conduct a current particularly well,” points out Erbe. He and his colleagues have therefore placed gold-plated nanoparticles on the DNA wires using chemical bonds. Using a “top-down” method – electron beam lithography – they subsequently make contact with the individual wires electronically. “This connection between the substantially larger electrodes and the individual DNA structures has come up against technical difficulties until now. By combining the two methods, we can resolve this issue. We could thus very precisely determine the charge transport through individual wires for the first time,” adds Erbe.
DNA based programmable circuits
The only known molecules that can be pre-designed to self-assemble into complex miniature circuits, which could in turn be used in computers, are DNA molecules. But no one has yet been able to demonstrate reliably and quantitatively the flow of electrical current through long DNA molecules.
Now, an international group led by HU Prof. Danny Porath of the chemistry institute reports reproducible and quantitative measurements of electricity flow through long molecules made of four DNA strands, signaling a significant breakthrough towards the development of DNA-based electrical circuits. The research was carried out in collaboration with groups from Denmark, Spain, the US, Italy and Cyprus. According to Prof. Porath, “This research paves the way for implementing DNA-based programmable circuits for molecular electronics, which could lead to a new generation of computers.”
“If we succeed in constructing logical switches from self-organizing molecules, then computers of the future will come from test tubes,” predicts Dr. Artur Erbe, physicist at the HZDR. The enormous advantages of this new technology are obvious: the billion-euro manufacturing plants necessary for producing today’s microelectronics could become a thing of the past. The advantages lie not only in production but also in operation, as the new molecular components will require very little energy. DNA computers have the potential to perform billions of operations at once, use less energy, and offer significantly more capacity per unit volume than conventional computers.
The DNA of molecular electronics: nanoscale rectifiers
The study published in Nature Chemistry under the title “Molecular rectifier composed of DNA with high rectification ratio enabled by intercalation” details how the Israeli and American researchers leveraged the predictability, diversity and programmability of DNA to design their first single molecule electronic device. They constructed a DNA-based molecular rectifier by site-specific intercalation of small molecules (coralyne) into a custom-designed 11-base-pair DNA duplex.
Measurements of the current–voltage curves of the DNA–coralyne molecular junction revealed unexpectedly large rectification, with a rectification ratio of about 15 at 1.1 V, a counter-intuitive finding considering the seemingly symmetrical molecular structure of the junction. “Creating and characterizing the world’s smallest diode is a significant milestone in the development of molecular electronic devices,” stated Dr. Yoni Dubi, a researcher in the BGU Department of Chemistry and Ilse Katz Institute for Nanoscale Science and Technology. “It gives us new insights into the electronic transport mechanism.”
The nanoscale diode thus obtained operates like a valve to facilitate electronic current flow in one direction. A collection of these nanoscale diodes, or molecules, has properties that resemble traditional electronic components such as a wire, transistor or rectifier.
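The valve-like behaviour of a diode is quantified by the rectification ratio: the magnitude of the forward current divided by the reverse current at the same absolute voltage. A sketch with illustrative current values (in nanoamps, chosen only to reproduce the reported ratio of about 15):

```python
def rectification_ratio(i_forward, i_reverse):
    """Ratio of forward to reverse current magnitudes at equal |V|."""
    return abs(i_forward / i_reverse)

# Illustrative currents at +/- 1.1 V, in nA (not measured values).
print(rectification_ratio(15.0, -1.0))  # -> 15.0
```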
Gene circuits in live cells can perform complex computations
Living cells are capable of performing complex computations on the environmental signals they encounter. These computations involve both analogue- and digital-like processing of signals to give rise to complex developmental programs, context-dependent behaviours and homeostatic activities. In contrast to natural biological systems, synthetic biological systems have largely focused on either digital or analogue computation separately. But now a team of researchers at MIT has developed a technique to integrate both analogue and digital computation in living cells, allowing them to form gene circuits capable of carrying out complex processing operations.
Analogue and digital computation each have distinct advantages for cellular computing. Digital computation in synthetic and natural biological systems is useful for signal integration given its relative robustness to noise and is exemplified by decision-making circuits, such as those in developmental programs that lead cells into differentiated states. Analogue computation is useful for signal processing in synthetic or natural biological systems when the output needs to depend on graded information or continuous functions of the inputs, such as the sum or ratio of energy sources or signalling molecules. However, analogue signal integration is susceptible to noise, making it challenging to design robust synthetic genetic programs. The MIT team therefore combined the benefits of analogue signal processing with digital signal integration to create artificial mixed-signal gene networks that carry out new hybrid functions in living cells.
The mixed signal device the researchers have developed is based on multiple elements. A threshold module consists of a sensor that detects analogue levels of a particular chemical. This threshold module controls the expression of the second component, a recombinase gene, which can in turn switch on or off a segment of DNA by inverting it, thereby converting it into a digital output. If the concentration of the chemical reaches a certain level, the threshold module expresses the recombinase gene, causing it to flip the DNA segment. This DNA segment itself contains a gene or gene-regulatory element that then alters the expression of a desired output.
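The threshold-plus-recombinase scheme amounts to comparing an analogue level against a set point and latching a binary state. A toy Python model follows; the threshold value and names are illustrative, not taken from the paper:

```python
THRESHOLD = 0.5  # arbitrary analogue concentration units (illustrative)

def recombinase_output(concentration):
    """Analogue in, digital out: crossing the threshold 'expresses the
    recombinase', which flips the DNA segment and switches the output."""
    segment_flipped = concentration >= THRESHOLD
    return 1 if segment_flipped else 0

print(recombinase_output(0.2), recombinase_output(0.8))  # -> 0 1
```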
“Most of the work in synthetic biology has focused on the digital approach, because [digital systems] are much easier to program,” said Timothy Lu, an associate professor of electrical engineering and computer science and of biological engineering, and head of the Synthetic Biology Group at MIT’s Research Laboratory of Electronics. “Digital is basically a way of computing in which you get intelligence out of very simple parts, because each part only does a very simple thing, but when you put them all together you get something that is very smart,” Lu says. “But that requires you to be able to put many of these parts together, and the challenge in biology, at least currently, is that you can’t assemble billions of transistors like you can on a piece of silicon,” he says.
The team has already built an analogue-to-digital converter circuit that implements ternary logic, a device that will only switch on in response to either a high or low concentration range of an input, and which is capable of producing two different outputs. “So this is how we take an analogue input, such as a concentration of a chemical, and convert it into a 0 or 1 signal,” Lu says. “And once that is done, and you have a piece of DNA that can be flipped upside down, then you can put together any of those pieces of DNA to perform digital computing,” he says.
In the future, the circuit could be used to detect glucose levels in the blood and respond in one of three ways depending on the concentration, he says. “If the glucose level was too high you might want your cells to produce insulin, if the glucose was too low you might want them to make glucagon, and if it was in the middle you wouldn’t want them to do anything,” he says. Similar analogue-to-digital converter circuits could also be used to detect a variety of chemicals, simply by changing the sensor, Lu says.
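The glucose scenario Lu describes is a ternary decision over one analogue input. A sketch with made-up threshold values (the band edges here are illustrative, not clinical guidance):

```python
def glucose_response(level):
    """Three-way response to one analogue glucose level (illustrative
    thresholds in arbitrary units)."""
    if level > 7.8:       # high band -> produce insulin
        return "insulin"
    if level < 4.0:       # low band -> produce glucagon
        return "glucagon"
    return "none"         # middle band -> do nothing

print(glucose_response(9.0), glucose_response(5.5), glucose_response(3.0))
```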
Immune cells used in cancer treatment could also be engineered to detect different environmental inputs, such as oxygen or tumor lysis levels, and vary their therapeutic activity in response. Other research groups are also interested in using the devices for environmental applications, such as engineering cells that detect concentrations of water pollutants, Lu says.
Signal-processing circuits composed of genetic comparators
Comparators with different thresholds can be composed together to build more complex signal-processing circuits in living cells. For example, circuits that turn gene expression ON with increasing input concentrations can be considered high-pass circuits (since they allow high-concentration inputs to ‘pass’ or be outputted). Next, to create low-pass circuits (which only allow low-concentration inputs to ‘pass’), we built a gene expression cassette that was ON in the basal state and used an inducible recombinase circuit to turn the output gene OFF by inverting the upstream promoter. Then, to create band-pass filters, we combined a low-threshold high-pass circuit with either a medium- or high-threshold low-pass circuit.
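The composition described above maps directly onto simple predicates: a high-pass comparator is ON above its threshold, a low-pass is ON below its threshold, and a band-pass is their conjunction. A Python sketch with illustrative thresholds:

```python
def high_pass(x, low):
    # ON for inputs at or above `low`
    return x >= low

def low_pass(x, high):
    # ON for inputs at or below `high`
    return x <= high

def band_pass(x, low, high):
    # ON only inside the [low, high] band
    return high_pass(x, low) and low_pass(x, high)

for x in (0.1, 0.5, 0.9):
    print(x, band_pass(x, 0.3, 0.7))
```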
Ahmad Khalil, an assistant professor of biomedical engineering at Boston University, who was not involved in the work, says the researchers have expanded the repertoire of computation in cells. “Developing these foundational tools and computational primitives is important as researchers try to build additional layers of sophistication for precisely controlling how cells interact with their environment,” Khalil says.
The research team recently created a spinout company, called Synlogic, which is now attempting to use simple versions of the circuits to engineer probiotic bacteria that can treat diseases in the gut. The company hopes to begin clinical trials of these bacteria-based treatments within the next 12 months.
Biomolecular Computation – DARPA-NSF Consortium
In the United States, the consortium for “Biomolecular Computation” started under DARPA and NSF, and carried out various attempts to harness the computational power of molecules. Led by J. Reif at Duke University, the consortium ran from 1997 to September 2000 under the code name “Prototyping Biomolecular Computations”. It sought to exploit not only high-performance computation through the massive parallelism of molecules, but also energy-saving computation using reactions at the nanoscale.
Among several application areas of molecular computation, the technology of “nano-fabrication” is becoming an active research area. It is considered a part of nanotechnology; since DNA is the most popular molecular tool, people call it “DNA nanotechnology”. Its research results are applied not only to nano-fabrication but also to nano-machines, i.e. the study of mechanistic machines at the nanoscale. Another promising application area of molecular computation is genomic analysis, such as DNA fingerprinting. In the consortium, simple sensing techniques such as the DNA chip were enhanced with more intelligent measuring technology based on molecular computation. Such biotechnical application of molecular computation is now called “computationally inspired biotechnology”.
Researchers have demonstrated many applications of biomolecular computing, such as the development of exquisite methods for detecting organic molecules at extremely low concentrations, techniques for hiding or encrypting DNA data, and neural systems for recognizing 2D images.