GM Plants with a different storyline

Another Scientific American article (William Powell, March 2014, pp. 54-57) inspired me to write this post, which tells the story of genetically engineering a plant: not a crop, but a tree!  First, a quick background on why….

Apparently, in 1876 an unfortunate situation developed following the importation of chestnut seeds from Japan: the seeds turned out to be contaminated with spores of a fungus (Cryphonectria parasitica) to which the American Chestnut was highly sensitive, but to which the Japanese chestnut was immune.  This fungus effectively strangles the tree through the growth of its mycelial fans, which produce oxalic acid that destroys the bark while allowing the fungus to spread.  It is this dead wood, produced by the action of the oxalic acid, that strangles the tree as it tightens its grip around the trunk.  Only 50 years after the initial import of this deadly fungus, more than 3 billion trees were dead!

A programme of research was initiated to produce hybrid trees by crossing Chinese variants, which are also resistant to the fungus, with American trees to produce a hardy hybrid, but this work will take many years.  Therefore, in parallel, a project was initiated to make use of what was, at the time, a novel approach: genetic engineering of the plant.  As is often the case in science, the idea was built around a fortunate coincidence: a group had isolated a wheat gene for oxalate oxidase and introduced it into other plants using a well-described engineering system based on Agrobacterium.  This enzyme was, of course, ideal for the proposed project as it breaks down oxalic acid, the primary cause of the blight.  In addition, they had available genes that produce antimicrobial peptides (AMPs) that disrupt C. parasitica infection and, as time has passed, genome sequencing projects have pointed to the genes in Chinese Chestnut trees that are responsible for resistance to the fungus.  The future looks promising for genetically engineering the tree instead of depending upon hybrids.

The use of the soil bacterium Agrobacterium tumefaciens is an interesting story in itself and a subject I enjoyed teaching as a perfect example of turning a natural system into advanced technology.  The odd thing about this bacterium is that it has the ability to infect plants with its own DNA, carried on its Ti plasmid, making the plant produce unusual amino acids that the bacterium cannot synthesise itself.  The result of this transfer of foreign DNA is that the plant develops small tumours, while the bacterium benefits from the availability of these biomolecules.  Genetic engineers were able to manipulate this system so that they could insert “foreign” DNA into the bacterial plasmid, in place of the tumour-forming components, and enable the bacterium to transfer this DNA into a wide variety of plants in a stable and predictable manner.  Eventually, the research group were able to develop the mechanisms for tissue culture of the genetically altered plant cells, and a model system based on poplar trees was available to initiate the experimental approach to overcoming the blight.

There are now more than 1,000 transgenic Chestnut trees growing in field sites, public acceptance of this approach to restoring a small piece of biodiversity is good, and the future holds promise for further such experimentation.  My own view is that this is a piece of genetic engineering that sounds very good and very promising for the future.  My only caution, also expressed by the researchers, is that the spread of the genetically modified seeds, while it may help remaining trees recover from infection, may also lead to cross-pollination with closely related plants.  However, there are few trees closely related to the American Chestnut, so this seems unlikely.  A good story that supports genetic engineering in plants!

 

A model organism – the virtual bacterium.

I was reading an article in Scientific American today that got me thinking about the complexities of biology – the article described the production of a virtual bacterium, using computing to model all of the known functions of a simple single cell.  The article was a very compelling read and presented a rational argument for how this could be achieved, based on modelling a single bacterium that would eventually divide.  The benefits of successfully achieving this are immense for both healthcare and drug development, but the difficulties are equally immense.

In order to simplify the problem, the organism chosen to be modelled is the bacterium with the smallest known genome – Mycoplasma genitalium, a bacterium with just over 500 genes, all of which have been sequenced.  This bacterium is also medically important, which adds weight to the usefulness of a virtual organism.  The problem of programming the computer was divided into modules, each describing a key process of the cell, all of which could feed back on one another in a way that captured actual binding coefficients for protein-protein and protein-DNA interactions.
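Out of curiosity, I sketched what such a modular design might look like in code.  This is purely my own illustration, not the Stanford implementation: the module names, rate constants and state variables are all invented, but it shows the idea of independent process modules reading and updating a shared cell state once per time step:

```python
# Toy sketch of a modular whole-cell simulation loop. The modules and
# rate constants are invented for illustration; the real Stanford model
# is vastly more detailed.

def transcription(state, dt):
    # mRNA made in proportion to gene dosage, degraded at a fixed rate
    state["mrna"] += (0.5 * state["dna"] - 0.1 * state["mrna"]) * dt

def translation(state, dt):
    # protein made from mRNA, limited by the available energy budget
    made = min(1.0 * state["mrna"] * dt, state["energy"])
    state["protein"] += made
    state["energy"] -= made

def metabolism(state, dt):
    # energy regenerated each step, up to a ceiling
    state["energy"] = min(state["energy"] + 2.0 * dt, 100.0)

MODULES = [transcription, translation, metabolism]

def simulate(hours, dt=0.01):
    state = {"dna": 1.0, "mrna": 0.0, "protein": 0.0, "energy": 100.0}
    for _ in range(int(hours / dt)):
        for module in MODULES:
            module(state, dt)   # each module sees the others' outputs
    return state

final = simulate(hours=1.0)
```

The interesting part is the feedback: translation depends on what transcription and metabolism have already done in the same step, which is exactly where the control logic of such a model has to be right.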

As I read the article, I began to realise that there were some simple problems with the description of how the computer would “manage” the cell, and when the author described doubling protein concentrations prior to cell division I knew there were problems with their model – this simplistic approach is not what happens in the cell.  Cellular control is an important aspect of this modelling and must be correct if the virtual cell is to be realistic.  I can illustrate what I mean with one example – plasmid replication.  A plasmid is an autonomous circle of DNA, often found in bacteria, that is easy to understand and ubiquitous in nature.

Replication of plasmid DNA:

The number of copies of a plasmid is tightly controlled in a bacterial cell, and this control is usually governed by an upper limit on an encoded RNA or protein: when the concentration of this product drops, such as AFTER division of the cell, replication will occur and the number of plasmids will increase until the correct plasmid number is attained (indicated by the concentration of the controlling gene product).

This is a classic example of cellular control and is very different from a model based on doubling protein concentration in anticipation of cell division.
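The difference is easy to show in a toy simulation (all numbers here are invented for illustration): replication happens only while the plasmid-encoded inhibitor is below its set point, so after division halves the cell contents, replication automatically restores the copy number:

```python
# Toy model of plasmid copy-number control. Replication is triggered by
# a LOW concentration of the plasmid-encoded inhibitor, not by any
# anticipation of division. All parameter values are illustrative.

SET_POINT = 20.0      # inhibitor level at which replication stops
PER_PLASMID = 2.0     # inhibitor contributed by each plasmid

def step(plasmids):
    inhibitor = PER_PLASMID * plasmids
    if inhibitor < SET_POINT:   # inhibitor low => replicate
        plasmids += 1
    return plasmids

def grow_and_divide(plasmids, generations, steps_per_gen=50):
    copy_numbers = []
    for _ in range(generations):
        for _ in range(steps_per_gen):
            plasmids = step(plasmids)
        copy_numbers.append(plasmids)   # copy number just before division
        plasmids //= 2                  # division halves the plasmid count
    return copy_numbers

history = grow_and_divide(plasmids=2, generations=5)
```

With these numbers the pre-division copy number settles at ten in every generation, whatever the starting number – replication responds to the drop in inhibitor after division rather than doubling anything in anticipation of it.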

Mycoplasma genetics and the complexity of biology:

This whole problem also got me thinking about my own subject area and my brief excursion into the study of restriction enzymes from Mycoplasma.  Restriction systems are a means for bacteria to protect themselves from viral invasion and, despite the small size of Mycoplasma genomes, these strains encode such systems.  There is clear evidence that such systems are “selfish”, and maybe they are fundamental to the long-term survival of bacteria, so I think they need to be dealt with in the model organism.  However, things begin to get complicated when you look at the well-described system from Mycoplasma pulmonis (a slightly more complex relative of the organism used for the model).  Instead of a single set of genes for a restriction system, as usually found in other organisms, the restriction genes of Mycoplasma pulmonis are capable of switching in a way that can generate four different proteins from a single gene.  This is where the complexity of modelling an organism arises: while the organism used may have a simple genome, it is important to understand how even simple organisms can increase their genetic complexity without increasing their DNA content.

Conclusion:

I think the work at Stanford is both interesting and important, and I think they have achieved a very important step along the road to modelling a living cell, but I also think they may need more information, and more complex modules, as they try to be more accurate with even these simple organisms.  It will be a long road before we have a model of a human cell, but what an incredible thought that would be!

 

Proteins, Peptides and Amyloids – Alzheimer’s Disease

If you have read my recent science blogs you will be aware that I have an interest in Alzheimer’s Disease based on work involving protein aggregation.  A recent article by Bhattacharjee and Bhattacharyya (Journal of Biological Chemistry, 2013, 288(42): pp. 30559-30570) brought back a result obtained in my lab many years ago and got me thinking about how small peptides can affect protein aggregation.

So, first the unexpected result from many years ago:

At the time we were studying a small peptide called Stp, which was able to switch off a complex restriction and modification enzyme called EcoprrI.  The researcher carrying out this work made an unexpected observation: the Stp peptide, when added to a group of proteins of different sizes (used as size markers), altered their apparent size and even aggregated many of the proteins (the aggregation was visible in the wells at the top of the gel).  The peptide was found to be able to inhibit certain protein-protein interactions (later we realised that this is how it prevented the restriction enzyme from working), but clearly it could also affect the behaviour of other proteins in a gel.  The effect was primarily aggregation, but the result made me think at the time that maybe amphipathic peptides might also influence, even disrupt, protein-protein interactions.  We had just observed that the EcoR124I restriction enzyme could dissociate as a means of controlling function, and I wondered if Stp would enhance that dissociation – lo and behold, Stp did indeed disrupt the subunit assembly of EcoR124I, and of EcoprrI; we had demonstrated how the anti-restriction activity of this small peptide worked.

And so, secondly, to the recent observations with amyloidosis:

Alzheimer’s Disease is initiated by protein aggregation, when β-amyloid (Aβ) peptides oligomerise into fibril structures that eventually form plaques within the brain.  Disruption of these aggregates would be a very important treatment for Alzheimer’s and is an area of intensive research.  What Bhattacharjee and Bhattacharyya have shown is that a small peptide, found in Russell’s viper venom, not only destabilises the amyloids but is also stable in blood for up to 24 hours.  This is a very interesting and promising observation that should stimulate the study of the effect of peptides on protein-protein interactions and perhaps lead to a non-toxic version of the peptide that could be used to treat Alzheimer’s.

Sometimes it is very interesting how one piece of science can stimulate interest in another, as illustrated above, and it also shows how diverse areas of research can sometimes be linked.  Great ideas are not always the result of hard work, but more often arise from interactions between different researchers – keep collaborating, people!

Update – Nov. 2015:

In the latest issue of Scientific American, under the heading “Advances”, they report an article in Nature describing work at UCL in which autopsies of several patients who died from CJD (the human version of “mad cow disease”, which they acquired from infected growth hormone treatment) revealed evidence of the amyloid formation associated with Alzheimer’s – at too early an age for natural onset.  Further work suggests that amyloid precursors, or small clumps of the beta-amyloid, may act in the same manner as prions do in the onset of CJD and lead to Alzheimer’s disease.  It would seem to me that the time is now ripe to begin a serious study of the protein misfolding, aggregation and conformational changes that may trigger these disorders.

Synthetic Biology – will it work?

Every now and then science comes up with a new approach to research that impacts on technology, but often these approaches are controversial and the headlines we see are far from the truth and can damage investment in the new techniques.  One good example is the genetic modification of plants and the production of GM foods, which have a really bad press in Europe despite many obvious benefits for the world economy and for disease control.  The latest technology, which follows from the explosion in genetic engineering techniques during the 1990s, builds on concepts developed in bionanotechnology and is known as Synthetic Biology.  But what is Synthetic Biology?  Will it work?  And what are the dangers versus benefits of these developments?  Gardner and Hawkins (2013) have written a recent review on this subject, which made me think a blog post on it was overdue.

My background in this area is two-fold:

  1. I was a part of a European Road-Mapping exercise, TESSY, that produced a description of what Synthetic Biology is and how it should be implemented/funded in Europe.
  2. I was also Project Coordinator for a European research project – BioNano Switch, funded by a scheme to support developments in Synthetic Biology, that aimed to produce a biosensor using an approach embedded in the concepts of Synthetic Biology.

So, what is Synthetic Biology?  I think the definition of this area of research needs to be clearly presented, something that was an important part of the TESSY project, as the term has become associated simply with the production of an artificial cell.  However, that is only one small aspect of the technology and the definition TESSY suggested is much broader:

Synthetic Biology aims to engineer and study biological systems that do not exist as such in nature, and use this approach for:

  • achieving better understanding of life processes,
  • generating and assembling functional modular components,
  • developing novel applications or processes.

This is quite a wide definition and is best illustrated with a simple comparison: in electronic engineering there exists a blueprint (a circuit diagram) that shows how components (resistors, capacitors, etc.) can be fitted together in a guaranteed order to produce a guaranteed result (a device such as an amplifier).  The Synthetic Biology concept would be to have a collection of such components (DNA parts that include promoters, terminators, genes and control elements; cellular systems, including artificial cells and genetically engineered bacteria capable of controlled gene expression; and interfaces that can connect biological systems to the human world for useful output).  This would mimic the electronic situation and provide a rapid mechanism for the assembly of biological parts into useful devices in a reliable and predictable manner.  There are many examples of such concepts, but the best known is the BioBricks Foundation.  However, at the TESSY meeting I was keen to make it clear that there are fundamental problems with this concept – so what are the problems?

At its simplest, a BioBricks database would consist of a number of different types of DNA (promoters, short DNA sequences that switch a gene on; terminators, short DNA sequences that switch a gene off; control elements, DNA sequences that control the promoter, switching a gene on or off as required; genes, DNA sequences that produce biotechnologically useful products; and cells, the final package that enables the DNA to do its work and produce the required product), which sounds logical and quite simple.  However, biological systems are not as reliable as electronic systems, and combinations of promoters and genes do not always work.  One of the major problems with protein production using such artificial recombinant systems is protein aggregation, resulting in insoluble proteins that are non-functional.  In addition, there are many examples (usually unpublished) of combinations of BioBricks that do not work as expected, or that, if used in a different order, also result in protein aggregation – none of which ever happens with electronic components.  The reasons are far from clear, but are closely related to the complexity of proteins and the need for them to operate in an aqueous environment.  My thought on how to deal with this situation is to have a large amount of metadata associated with any database of BioBricks, including information about failures or problems of protein production from specific combinations.  However, I am not aware of any such approach!
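To make the metadata idea concrete, here is a sketch of what a parts registry with failure records could look like.  Everything here (the part names, outcome labels and the little API) is my own invention for illustration, not the actual BioBricks registry:

```python
# Hypothetical sketch of a parts registry that stores negative results
# (failed assemblies) alongside the parts themselves. All names and
# records are invented examples, not real BioBricks entries.

from dataclasses import dataclass

@dataclass
class Part:
    name: str
    kind: str        # "promoter", "gene", "terminator", ...

@dataclass(frozen=True)
class AssemblyRecord:
    parts: tuple     # ordered part names: order matters in practice!
    outcome: str     # e.g. "soluble", "aggregated", "no expression"
    notes: str = ""

class Registry:
    def __init__(self):
        self.parts = {}
        self.records = []

    def add_part(self, part):
        self.parts[part.name] = part

    def report(self, parts, outcome, notes=""):
        # failures are recorded as first-class data, not discarded
        self.records.append(AssemblyRecord(tuple(parts), outcome, notes))

    def known_failures(self, part_name):
        return [r for r in self.records
                if part_name in r.parts and r.outcome != "soluble"]

reg = Registry()
reg.add_part(Part("Pxyz", "promoter"))
reg.add_part(Part("geneA", "gene"))
reg.report(["Pxyz", "geneA"], "soluble")
reg.report(["geneA", "Pxyz"], "aggregated", "same parts, swapped order")

failures = reg.known_failures("geneA")
```

A query on “geneA” then returns the aggregation record, warning the next user that this combination fails when the parts are assembled in the reversed order – exactly the kind of negative result that currently goes unpublished.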

There are other aspects of Synthetic Biology that do not depend on BioBricks, and one example is the artificial cell.  The ideal for such a system is a self-assembling package, capable of entrapping DNA, capable of replication and survival, and able to produce useful biomaterials – and significant steps have been made toward such a system.  However, one area of concern as such systems are developed is containment: can we really be sure these artificial microbes will remain in a contained environment and not escape to interact with, and possibly change, the natural bacterial population?  Nevertheless, the power and capability of such a system should not be underestimated, and its likely use in future medicine could be immense – a simple example would be as a delivery system for biomaterials that can activate cellular changes by targeting the required cell and then switching on protein production (e.g. of hormones).  This type of targeted medicine would be a major breakthrough during the later part of this century.

Another type of Synthetic Biology involves the artificial assembly (possibly self-assembly) of biomaterials onto an artificial surface in a way that is unlikely to occur naturally, but provides a useful device – I see this as closer to what a BioBricks project should be like.  Such a system is usually modular in nature, and the biomaterial would normally be produced using recombinant techniques.  The research project I mentioned earlier involved such a device, and the outcome was a single-molecule biosensor for detecting drug-target interactions at the limits of sensitivity.  The major issue we had in developing this device was the precise and accurate attachment of biomaterials to a surface in such a way that they function normally.  However, overall the project was successful and shows that a Synthetic Biology approach has merits.

What are the benefits that Synthetic Biology can provide society?  Well, one advantage is a more systematic approach to biotechnology, which to date has tended to move forward at the whim of researchers in academia or industry.  Assuming the problems associated with protein production mentioned above can be better understood, there could be a major boost in the use of proteins for biotechnology.  In addition, Synthetic Biology techniques offer a unique opportunity for the miniaturisation and mass production of biosensors that could massively improve medical diagnosis.  Finally, artificial cells have many future applications in medicine, if they can be produced in a reliable way and made to work as expected:

  1. They could provide insulin for diabetics.
  2. They could be made to generate stem cells, which could be used in diseases such as Alzheimer’s and Huntington’s.
  3. They could deliver specific proteins, drugs and hormones to target locations.
  4. They could treat diseases that result from faulty enzyme production (e.g. Phenylketonuria).
  5. They could even be used to remove cholesterol from the blood stream.

However, there are always drawbacks and risks associated with any new scientific advance:

  1. Containment of any artificial organism is the most obvious risk, and this is compounded by the possibility of using the organism to produce toxins that would allow its use as a biological weapon.
  2. The ability to follow a simple “circuit diagram” for protein production, combined with a readily available database of biological material, could enable a terrorist to design a lethal and unpredictable weapon, more complex and perhaps more targeted than anything known to date.
  3. Research could be inhibited if a readily available collection of materials prevents patent protection of inventions.  This could be complicated by the infringement of patents by foreign powers in a way that blocks conventional research investment.
  4. Problems associated with the introduction of novel nano-sized materials into the human body, including artificial cells, which may be toxic in the long term.

My own feeling is that we must provide rigorous containment and controls (many of which already exist), but allow Synthetic Biology to develop.  Perhaps there should be a review of the situation by the end of this decade, but I hope that the risks do not materialise and that society can benefit from this work.

Protein aggregation being seen as important?

I have just come across a paper that summarises some of my views on what should be important research.  Murphy and Roberts (Biotechnology Progress, 2013, 29(5): pp. 1109-1115) have made the observation that for many years protein aggregation has been seen as a nuisance factor that prevents high yields in the production of protein from recombinant sources, or as a hassle with certain purification methodologies.  However, our understanding of the mechanism behind prion misfolding, which leads to BSE and CJD, and of amyloid formation, which is involved in Alzheimer’s disease and other diseases, has shown that this subject should never have been ignored for so long.

So what is protein misfolding?  When a protein is first synthesised by the ribosome it immediately begins to fold as hydrophobic amino acids are incorporated (the amino acids strung together to form a protein can be hydrophobic – disliking a water-based environment; hydrophilic – preferring to be surrounded by water; or neutral).  The hydrophobic core will fold to minimise contact between water and the hydrophobic amino acids.  However, the process is not simple, and many other proteins and factors can be involved, including chaperone proteins that help the native protein to fold correctly.  When a protein is produced in very large amounts this folding system may go wrong, and the protein may aggregate as a simple means of avoiding exposure of the hydrophobic core.  Over-production of proteins from recombinant (genetically engineered) sources is a classic starting point for this problem and has for many years been a major issue for the biotechnology industry.  For the research scientist, however, there has been a different consequence – wasted research time that cannot be published!

I can illustrate this problem with a story from my own work.  I have worked with a multi-subunit protein for many years, and a key step in this work was being able to over-produce one key protein (the largest) in milligram quantities.  Once we had found a mechanism to do this we were able to publish the work, but we always looked for a better, or easier, solution.  Interestingly, one such solution was to fuse another protein onto the required protein, which could then be used to pull it out of the bulk mixture of all proteins.  The fusion partner can be attached to either end of the protein of interest, and yet, when we swapped ends for the attachment, the fusion was insoluble due to aggregation.  This negative result was never published, as there is no journal that would encourage such a publication, despite the information being really important to others working in the area.

The problem in science is that journals do not publish negative results, even though such information can be very useful.  I would love to see some of the more enlightened journals review this policy and start to publish negative results.  Maybe the renewed interest in protein aggregation, driven by its importance in disease states, might encourage this new approach.  I have a feeling that there is already a lot of data about the causes of protein aggregation that could help us understand amyloidosis and other protein-based diseases, and also, more importantly, a lot of expertise that could benefit research in this key area.

You are what you eat?

The above title is an old quotation that has become something of a common concept as modern Western society finds obesity to be a major issue.  But it was reading Scientific American this month that led to me writing this blog…..

The September issue of Scientific American is titled “Special Food Issue”, and reading some of the articles has made me think about the thorny subject of healthy eating.  A wide range of subjects is covered, ranging from the bees used to pollinate crops to genetically engineered food.  However, the article I found most interesting was aimed at addressing the issue of obesity, its causes and the complex issue of the calorific value of foods.

So, I will start with this issue of calories.  What the article was keen to point out is that the use of calories to label food content is a little arbitrary and sometimes misleading.  There is no attempt to understand how easily any specific food is taken up by the body, how much energy digesting the food may itself consume, or whether gut bacteria are in fact using most or perhaps all of the available energy for their own existence!  While the latter point is a little extreme, it illustrates the problem.  The other important factor is that the calorie is used to label protein, fat, carbohydrate and other energy sources but, in fact, makes little allowance for their respective destinies in the body, which may or may not make them available for energy use.  It is this last point that is really the crux of the problem.

Apparently, there are two theories as to the main cause of obesity in Western society:

  1. What I would call the IN-OUT theory, which is very simple: if you do not burn all of the calories you consume, you will lay the excess down as fat and increase your body mass.
  2. The second theory is more complex and is known as the Hormone Control Theory, which suggests that the use of different energy sources, and the subsequent storage of energy within the body, is controlled by hormones (particularly insulin), and this in turn is triggered by the sugar content of the blood.

Interestingly, the first theory is currently the more popular and is used to guide government-based healthcare decisions.  In contrast, the second theory was more popular before the Second World War (WWII), but has since become unfashionable – despite never having been properly tested and, therefore, never disproved.

My own view is, as with most theories involving biological systems, that both will most likely be true and each will be important in certain circumstances.  It is clear that the “couch potato” lifestyle is not healthy and almost certainly illustrates point 1 very clearly.  However, there are many examples of people with a very high body mass index who have changed both their diet and their exercise regime, yet fail to lose weight despite what is clearly a major increase in exercise levels.  The importance of insulin-glucose ratios in point 2 is critical and closely links the problem to diabetes and many other diseases.  The presence of glucose in the blood stream induces insulin release from the pancreas, which enables transport of the glucose into muscle and other cells, where it is converted to fatty acids for long-term storage.  In diabetics, where insulin production may be non-existent (type 1 diabetes), hyperglycaemia is quickly observed as glucose levels remain high, or increase.  In type 2 diabetes, insulin is produced at lower levels than required, or is faulty, resulting in reduced glucose uptake into cells.  However, insulin also has another role in metabolism, which is to regulate fat cells and to prevent the release of stored fatty acids as an energy source (this is because the presence of insulin reflects the availability of blood glucose, which is a more efficient energy supply than fat).  If insulin levels are too high then fat storage will continue and weight gain will result.

It seems likely that highly processed foods contain carbohydrates that provide an exceptionally high glucose level to the blood and that many of the obesity problems of Western Society may reflect a sugar-rich diet rather than just a high calorie diet.  Exercise to burn off calories will have little effect in these circumstances (the body burns calories even without exercise) as insulin levels will remain high if blood sugar levels remain high.
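The loop described above can be caricatured in a few lines of code.  This is a cartoon with invented rate constants, not a physiological model, but it captures the argument: glucose drives insulin, insulin drives glucose into storage, and while insulin stays high, stored fat stays locked away:

```python
# Cartoon of the glucose-insulin feedback loop. All rate constants are
# invented for illustration; this is not a physiological model.

def after_meal(glucose, steps=100, dt=0.1):
    insulin, fat_released = 0.0, 0.0
    for _ in range(steps):
        # glucose stimulates insulin secretion; insulin decays over time
        insulin += (0.5 * glucose - 0.2 * insulin) * dt
        # insulin drives glucose into cells for storage
        glucose = max(glucose - 0.3 * insulin * glucose * dt, 0.0)
        # stored fat is only mobilised while insulin is low
        if insulin < 1.0:
            fat_released += 0.1 * dt
    return glucose, insulin, fat_released

sugary = after_meal(glucose=10.0)   # sugar-rich meal: insulin stays high
fasting = after_meal(glucose=0.0)   # no glucose: fat is freely mobilised
```

With these numbers the sugary meal keeps insulin above the mobilisation threshold for almost the whole run, so far less fat is released than in the fasting run – exercise alone cannot unlock the fat store while blood sugar, and therefore insulin, stays high.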

Update (25/2/2015):
A recent paper by Haghighi et al., 2015 (Polymorphic variant of MnSOD A16V and risk of diabetic retinopathy, Molecular Biology 49, 99-102) shows that oxidation plays a key role in the development of diabetic retinopathy (a side effect of Type 2 Diabetes that can lead to blindness).  This is also an important observation with regard to food, as many coloured foods (especially fruit) are rich in antioxidants, and this could help reduce the type of effect described in the paper.

So, what does this mean about the food we eat?  Well, it is much more important to look at sugar and carbohydrate levels than just the calorific value of a food.  A good example is breakfast cereals, where many common “healthy” cereals have sugar levels as high as 30% (30 g per 100 g of cereal) and carbohydrate levels around 80%!  However, much lower levels are available, with oats (porridge) and Shredded Wheat as examples.  Low-carbohydrate diets that are rich in protein are a good way to control weight by diet – proteins can be converted to glucose in the liver – but such a simple approach has other problems.  We often hear about the “five fruits or vegetables per day”, and that has a great deal of importance in providing a varied diet, but fruits also have a high sugar content!  There are two ways to look at this: one, fructose in the blood does not induce insulin production; two, fruits (especially highly coloured fruits) contain antioxidants that are very good for the body, preventing damage from the highly reactive species called radicals that are produced by sunlight.  But not all is rosy, as high fructose levels may induce insulin resistance, which leads to type 2 diabetes!  However, a recent British Medical Journal article, reported in The Express newspaper, has indicated that eating fresh fruit such as blueberries can reduce the risk of Type 2 Diabetes by a third.  This fits with my own diet – I eat a mixture of fresh fruit every day – and, as I have said in previous blogs, probably reflects the presence of antioxidants in coloured fruits.  Importantly, fruit juice was found to have a detrimental effect, increasing blood sugar and insulin in most cases.  Vegetables are, of course, low in carbohydrates and will not lead to weight gain.  Bread, potato, pasta, rice and sweets are all a supply of sugar, while meat, eggs and fish are high-protein foods.

No one is perfect and sticking rigidly to a low carbohydrate diet can be difficult, but awareness always helps!  Try to pick a mixed and varied diet and always exercise as much as you can – even just vigorous walking is a good idea.

High-level equipment and its impact on science

A recent article (Hamrang et al., Trends in Biotechnology, 2013) made me think about the impact modern technology is having on how scientific research is developing and, in particular, about my own experience of applying some of this technology.  I thought it might be interesting to describe some of this technology, how it has influenced my own research, how it might develop to provide new approaches for the advancement of science, and how this will change requirements in teaching.

A good place to start is SINGLE MOLECULE ANALYSIS, a concept I had never thought of in my early research career, but it became a possibility during the 1990s.  The first time I heard of single molecule analysis was something called a Scanning Tunnelling Microscope, but I could not see uses for this device outside of chemistry as the objects to be visualised were in a vacuum.  However, this device quickly developed into the Atomic Force Microscope (AFM) and the study of biological molecules was soon underway.  This device measures surface topology and can visualise large proteins as single molecules – my first involvement was to visualise DNA molecules that were being manipulated by a molecular motor.  The resolution was astounding, but more importantly we were able to use this technology to study intermediates that had been biochemically “frozen” in position and resolve features we never expected to see.  Further studies allowed us to also study protein-protein interactions and super-molecular assembly of the motor.  The wonderful thing about this technology is that interpretation of the data has quickly moved from the negativity of “artefacts” and a lack of faith that images showed what was thought to be there, to a situation where major advancements are possible through direct topology studies.  Developments of this technology are likely to include automatic cell identification, in vivo measurements using fine capillary needles and measurements of ligand-surface target interactions on cells – this could influence drug development and biomedical measurements.  Another developing technology related to AFM is the multiple tip biosensor that can sense minute amounts of material in a variety of situations (a “molecular sniffer” – one use I heard of directly from the developer was for wine tasting/testing!).
My second single molecule analysis involved a Magnetic Tweezer setup, which is able to visualise movement of a magnetic bead attached to a single molecule (in our case DNA).  This allowed us to determine how a molecular motor moves DNA through the bound complex but, perhaps more importantly, it led us to develop a biosensor based around this technology that could be used to determine drug-target interactions at the single molecule level, and perhaps allow single molecule sensing in anti-cancer drug discovery.  This technology is also closely related to optical tweezer systems that have been used in similar studies, and the future is certain to make such technology cheaper and easier to use, widening its application in biomedical research.  The key to this development will be the increased sensitivity of single molecule studies and how this will enable more detailed understanding of intermediate steps in molecular motion induced by biomolecules.  I imagine that as newer versions of these devices become more automated, they will be used as biosensors to study more complex systems that involve molecular motion.  In the short term, it seems to me that there is scope for the application of these devices in understanding protein amyloid formation and stability, with a view to determining mechanisms for destabilising such structures.

SURFACE ATTACHED BIOMOLECULAR ANALYSIS.
The best known system in this category of analytical devices is Biacore’s Surface Plasmon Resonance (SPR), which uses a mass detection mechanism based on changes to the plasmon effect produced by electrons in a thin layer of gold. We have used this to study protein-DNA interactions and subunit assembly, and the technique provides a useful confirmation of older techniques such as electrophoresis. I have been involved in discussions about the application of this technology in the field, but reliability and setup problems remain an issue. In comparison, the Farfield dual beam interferometer can use homemade chips that simplify setup and seems more reliable for similar measurements. Where I see potential for these devices is in the study of protein aggregation, which has tremendous promise in the study of amyloid-based diseases. This idea sprang from discussions with Farfield about using their interferometer to detect crystallization and would be an interesting project. However, if these devices are to have a major impact in biomedical sciences, they need to be easier to set up, more reliable and smaller.  Recent advances are leading SPR toward single molecule sensing (Punj, D., et al., Nat Nano, 2013). I believe the real key to implementing this technology as a biosensor is to incorporate two technologies in the same device. We proposed to have a dynamic system, on an interferometer chip, whose activity would switch off the interferometer when active. This could be used in drug discovery, targeting the drug at two systems simultaneously. If massively parallel systems can be developed, possibly based around laminar flow, I can see a use in molecular detection of hazardous molecules using either antibodies or aptamers.
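To give a flavour of what an SPR instrument actually measures, here is a minimal sketch of the standard 1:1 Langmuir binding model used to interpret sensorgrams – response rises while analyte flows over the chip and decays when buffer replaces it.  All the rate constants and concentrations below are invented for illustration, not taken from any of our experiments:

```python
# 1:1 Langmuir model behind an SPR sensorgram, integrated with Euler steps:
# association: dR/dt = ka*C*(Rmax - R) - kd*R;  dissociation: dR/dt = -kd*R.
# All constants are illustrative, not measured values.
ka, kd = 1e5, 1e-3      # association (1/M/s) and dissociation (1/s) rates
C, Rmax = 1e-7, 100.0   # analyte concentration (M) and surface capacity (RU)
dt = 0.1                # integration time step (s)

def sensorgram(t_assoc=300.0, t_dissoc=300.0):
    R, trace = 0.0, []
    for _ in range(int(t_assoc / dt)):       # analyte flowing over the chip
        R += (ka * C * (Rmax - R) - kd * R) * dt
        trace.append(R)
    for _ in range(int(t_dissoc / dt)):      # buffer only: complex decays
        R += (-kd * R) * dt
        trace.append(R)
    return trace

trace = sensorgram()
KD = kd / ka  # equilibrium dissociation constant from the two rates
print(f"KD = {KD:.1e} M, model plateau ≈ {Rmax * C / (C + KD):.1f} RU")
```

Fitting curves like this to the measured response is how the instrument turns a mass change on the gold surface into binding kinetics.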

COMPUTER BASED IMAGE ANALYSIS.
I have not directly used this technology, but I have seen the results applied to the molecular motor that I have worked with. The value of the system is that cryo-EM allows the gathering of many images of a large protein complex, which allows structural studies of systems that cannot be crystallized or visualized using NMR. My feeling is that, as computing power increases, this technique, combined with molecular modelling in silico, will provide structural information for many complex biological systems. The impact of this knowledge will greatly influence the design of drugs and will aid the biochemical analysis of complex systems. My feeling is that further development of this technology will revolve around combining it with other techniques for visualising biomolecules; one I have mentioned before is Raman Spectroscopy, which could allow studies of these complexes in situ, and another could be single molecule fluorescence (Grohmann, et al. Current Opinion in Chemical Biology). I can easily imagine collaborative research projects that will bring a variety of such techniques to the production of the 3D image of real biological systems isolated from cells. Such research would have to follow existing models of bidding to use such equipment in centres of excellence. Such centres would bring together visualization techniques with single molecule analysis and data from genomics and proteomics. The research lab of the future will depend on much more international collaboration than we have seen up to now!
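The computational heart of the “many images” idea is simple: each micrograph is the same particle buried in independent noise, so averaging N aligned copies shrinks the noise by roughly the square root of N.  A one-dimensional toy version (the “particle” and noise level are invented for illustration):

```python
import random
random.seed(0)

# Averaging principle behind single-particle cryo-EM, in one dimension:
# many noisy copies of one signal, averaged, recover the underlying shape.
signal = [1.0 if 20 <= i < 30 else 0.0 for i in range(50)]  # toy "particle"

def noisy_image(sigma=2.0):
    # one simulated micrograph: signal plus heavy Gaussian noise
    return [s + random.gauss(0.0, sigma) for s in signal]

def average(images):
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def rms_error(img):
    # root-mean-square deviation from the true signal
    return (sum((a - b) ** 2 for a, b in zip(img, signal)) / len(signal)) ** 0.5

one = rms_error(noisy_image())
many = rms_error(average([noisy_image() for _ in range(400)]))
print(f"RMS noise: single image {one:.2f}, average of 400 images {many:.2f}")
```

Real single-particle reconstruction must also align and classify the images before averaging, which is where the growing computing power comes in.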

STUDIES USING NANOPORES.
The current technology in this area divides into two types of nanopores: physical holes in a surface and reconstituted biological pores. I have used a physical nanopore to investigate the separation of proteins from DNA using electrophoresis across the nanopore; the beauty of this system is that it also quantifies the number of molecules crossing the pore. I imagine that such devices will develop using surface attached biomolecules around the pore, which will introduce specificity into the device, but what I would have liked to develop is a dynamic device for ordered assembly of molecules (an artificial ribosome) where the nanopore allows separation of the assembly line and the drive components – such are the dreams of a retired scientist!
Biological nanopores are the main focus for single molecule sequencing of DNA, and the future must be portable, personal sequencing devices (DNA sequencing information must reside with the source of the DNA, and for humans this will eventually lead to personal devices). However, the level of available data will be enormous, and the growth of “omics” research will require new ways to store, organise and access this information. A new method for studying biological systems is already underway in which analysis of data allows a better understanding of complex systems. This will eventually become a part of biomedicine and will support personalised medicine.
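The molecule-counting side of nanopore work comes down to event detection: each molecule passing the pore transiently blocks the ionic current, so translocations appear as dips below a threshold in the baseline trace.  A minimal sketch with a simulated trace (the current levels and event shape are invented for illustration):

```python
import random
random.seed(1)

# Counting translocation events in a simulated nanopore current trace:
# each passing molecule briefly blocks the pore, dropping the current.
BASELINE, BLOCKADE, NOISE = 100.0, 60.0, 2.0   # pA, illustrative values

def simulate_trace(n_events=5, length=1000):
    trace = [BASELINE + random.gauss(0, NOISE) for _ in range(length)]
    for k in range(n_events):
        start = 100 + k * 180                  # well-separated blockades
        for i in range(start, start + 20):     # 20-sample dip per molecule
            trace[i] = BLOCKADE + random.gauss(0, NOISE)
    return trace

def count_events(trace, threshold=80.0):
    events, inside = 0, False
    for current in trace:
        if current < threshold and not inside:  # entering a blockade
            events, inside = events + 1, True
        elif current >= threshold:              # back at baseline
            inside = False
    return events

print("events counted:", count_events(simulate_trace()))  # expect 5
```

Sequencing devices go one step further, reading not just the presence of a dip but its fine structure, which reports on the bases passing through the pore.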

I was once asked by a student what the future holds for Biology, and I now know it will be an area of significant growth for many years to come, but this requires the right focus for investment and a new direction for undergraduates in their studies – good luck to those I have taught, who now have to lead these developments.