A model organism – the virtual bacterium.

I was reading an article in Scientific American today that got me thinking about the complexities of biology – the article described the production of a virtual bacterium, using computing to model all of the known functions of a simple single cell.  The article was a very compelling read and presented a rational argument for how this could be achieved, based on modelling a single bacterium that would eventually divide.  The benefits of a successful approach are immense for both healthcare and drug development, but the difficulties are equally immense.

In order to simplify the problem, the organism chosen to be modelled is the bacterium with the smallest known genome – Mycoplasma genitalium, a bacterium with just over 500 genes, all of which have been sequenced.  This bacterium is also medically important, which adds weight to the usefulness of a virtual organism.  The problem of programming the computer was divided into modules, each describing a key process of the cell, all of which could feed back in a way that described actual binding coefficients for protein-protein and protein-DNA interactions.
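The modular idea can be sketched in a few lines of code – a toy loop, not the Stanford implementation, with module names, state variables and rate constants invented purely for illustration:

```python
# A toy sketch of a modular whole-cell simulation loop.
# Module names, state variables and rates are illustrative only.

class Module:
    def step(self, state, dt):
        raise NotImplementedError

class Transcription(Module):
    def step(self, state, dt):
        # mRNA made in proportion to gene dosage
        state["mRNA"] += 0.5 * state["genes"] * dt

class Translation(Module):
    def step(self, state, dt):
        # protein made in proportion to available mRNA
        state["protein"] += 2.0 * state["mRNA"] * dt

def simulate(modules, state, dt=0.01, steps=1000):
    # Each time step, every module reads and updates the shared cell
    # state, so the modules feed back on one another automatically.
    for _ in range(steps):
        for m in modules:
            m.step(state, dt)
    return state

state = simulate([Transcription(), Translation()],
                 {"genes": 1.0, "mRNA": 0.0, "protein": 0.0})
print(round(state["mRNA"], 3), round(state["protein"], 2))
```

The point of the shared-state design is that adding a new module (metabolism, DNA replication, division) never requires rewriting the others – which is presumably why the real project was organised this way.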

As I read the article, I began to realise that there were some problems with the description of how the computer would “manage” the cell, and when the author described doubling protein concentrations prior to cell division I knew there were problems with their model – this simplistic approach is not what happens in the cell.  Cellular control is an important aspect of this modelling and must be correct if the cell is to be realistic.  I can illustrate what I mean with one example – plasmid replication.  A plasmid is an autonomous circle of DNA, often found in bacteria; plasmids are easy to understand and ubiquitous in nature.

Replication of plasmid DNA:

The number of copies of a plasmid is tightly controlled in a bacterial cell, and this control is usually governed by an upper limit to the concentration of an encoded RNA or protein.  When the concentration of this product drops, such as AFTER division of the cell, replication will occur and the number of plasmids will increase until the correct plasmid number is attained (indicated by the concentration of the controlling gene product).
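This feedback loop is easy to caricature in code (the threshold, production and dilution numbers below are arbitrary, chosen only to show the behaviour, not measured values):

```python
# Toy negative-feedback control of plasmid copy number.
# Each plasmid copy produces an inhibitor (RNA or protein);
# replication only proceeds while inhibitor is below a threshold.

def step(plasmids, inhibitor):
    # each plasmid makes inhibitor; inhibitor also decays/dilutes
    inhibitor = inhibitor + 1.0 * plasmids - 0.5 * inhibitor
    if inhibitor < 10.0:          # low inhibitor -> replicate
        plasmids += 1
    return plasmids, inhibitor

# state just after cell division: copies halved, inhibitor diluted
plasmids, inhibitor = 5, 5.0
for _ in range(50):
    plasmids, inhibitor = step(plasmids, inhibitor)
print(plasmids)  # settles at the controlled copy number
```

Doubling every component ahead of division has no place here – the copy number recovers because replication responds to the diluted inhibitor, which is exactly the control logic described above.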

This is a classic example of cellular control and is very different from a model based on doubling protein concentration in anticipation of cellular division.

Mycoplasma genetics and the complexity of biology:

This whole problem also got me thinking about my own subject area and my brief excursion into the study of restriction enzymes from Mycoplasma.  Restriction systems are a means for bacteria to protect themselves from viral invasion and, despite the small size of the genomes of Mycoplasma strains, they encode such systems.  There is clear evidence that such systems are “selfish” and maybe they are fundamental to the long-term survival of bacteria, so I think they need to be dealt with in the model organism.  However, things begin to get complicated when you look at the well-described system from Mycoplasma pulmonis (a slightly more complex relative of the organism used for the model).  Instead of a single set of genes for a restriction system, as usually found in other organisms, the restriction genes of Mycoplasma pulmonis are capable of switching in a way that can generate four different proteins from a single gene.  This is where the complexity of modelling an organism arises: while the organism used may have a simple genome, it is important to know how even simple organisms can increase their genetic complexity without increasing their DNA content.


I think the work at Stanford is both interesting and important and I think they have achieved a very important step along the road to modelling a living cell, but I also think they may need more information, and more complex modules, as they try to be more accurate with even these simple organisms.  It will be a long road before we have a model of a human cell, but what an incredible thought that would be!


Protein aggregation being seen as important?

I have just come across a paper that summarises some of my views on what should be important research.  Murphy and Roberts (Biotechnology Progress, 2013, 29(5): p. 1109-1115) have made the observation that for many years protein aggregation has been seen as a nuisance factor that prevents high yields in the production of protein from recombinant sources, or is a hassle with certain purification methodologies.  However, the understanding of the mechanisms behind prion misfolding, which leads to BSE and CJD, and amyloid formation, which is involved in Alzheimer’s disease and other diseases, has shown that this subject should never have been ignored for so long.

So what is protein misfolding?  As the diagram shows, when a protein is first synthesised by the ribosome it immediately begins to fold as hydrophobic amino acids are incorporated (these small components of proteins, strung together to form the protein, can be either hydrophobic – disliking a water-based environment, hydrophilic – preferring to be surrounded by water, or neutral).  This hydrophobic core will fold to minimise contact between water and the hydrophobic amino acids.  However, the process is not simple and many other proteins and factors can be involved, including chaperone proteins that help the native protein to fold correctly.  When a protein is produced in very large amounts this folding system may go wrong and the protein molecules may aggregate together as a simple means of avoiding exposure of the hydrophobic core.  Over-production of proteins from recombinant (genetically engineered) sources is a classic starting point for this problem and has for many years been a major issue for the biotechnology industry.  However, for the research scientist there has been a different problem – wasted research time that cannot be published!
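One way to see why over-production tips the balance toward aggregation: productive folding is first-order in unfolded protein, while aggregation requires unfolded chains to collide and so is roughly second-order in concentration.  A toy rate model (the rate constants are invented for illustration) makes the point:

```python
# Competition between first-order folding and second-order aggregation.
# k_fold and k_agg are arbitrary illustrative rate constants.

def fraction_aggregated(u0, k_fold=1.0, k_agg=0.1, dt=0.001, steps=20000):
    u, folded, agg = u0, 0.0, 0.0   # unfolded, folded, aggregated
    for _ in range(steps):
        fold_rate = k_fold * u       # first-order in unfolded protein
        agg_rate = k_agg * u * u     # second-order: chains must collide
        u -= (fold_rate + agg_rate) * dt
        folded += fold_rate * dt
        agg += agg_rate * dt
    return agg / (folded + agg)

low = fraction_aggregated(u0=1.0)    # modest expression level
high = fraction_aggregated(u0=50.0)  # strong over-expression
print(low, high)
```

At the low starting concentration almost everything folds; at fifty times that concentration the collision term dominates and most of the protein ends up aggregated – the recombinant over-production problem in two rate equations.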

I can illustrate this problem with a story from my own work.  I have worked with a multi-subunit protein for many years and a key step in this work was being able to over-produce one key protein (the largest) in milligram quantities.  Once we had found a mechanism to do this we were able to publish the work, but we always looked for a better, or easier, solution.  Interestingly, one such solution was to fuse another protein onto the required protein, which could be used to pull it out of the bulk mixture of all proteins.  The fusion partner can be attached to either end of the protein of interest and yet, when we swapped ends for the attachment, the fusion was insoluble due to aggregation.  This negative result was never published, as there is no journal that would encourage such a publication, despite the information being really important to others working in the area.

The problem in science is that journals do not publish negative results even though such information can be very useful.  I would love to see some of the more enlightened journals review this policy and start to publish negative results.  Maybe the renewed interest in protein aggregation, driven by its importance in disease states, might encourage this new approach.  I have a feeling that there is already a lot of data about the causes of protein aggregation that could help us understand amyloidosis and other protein-based diseases, but also, more importantly, a lot of expertise that could benefit research in this key area.

High-level equipment and its impact on science

A recent article (Hamrang, et al. Trends in Biotechnology, 2013) made me think about the impact modern technology is having on how scientific research is developing and, in particular, my own experience of applying some of this technology.  I thought it might be interesting to detail some of this technology and how it has influenced my own research and how it might both develop to provide new approaches for the advancement of science and how this will change requirements in teaching.

A good place to start is SINGLE MOLECULE ANALYSIS, a concept I had never thought of in my early research career, but which became a possibility during the 1990s.  The first time I heard of single molecule analysis it involved something called a Scanning Tunnelling Microscope, but I could not see uses for this device outside of chemistry as the objects to be visualised were in a vacuum.  However, this device quickly developed into the Atomic Force Microscope (AFM) and the study of biological molecules was soon underway.  This device measures surface topology and can visualise large proteins as single molecules – my first involvement was to visualise DNA molecules that were being manipulated by a molecular motor.  The resolution was astounding, but more importantly we were able to use this technology to study intermediates that had been biochemically “frozen” in position and resolve features we never expected to see.  Further studies allowed us to also study protein-protein interactions and super-molecular assembly of the motor.  The wonderful thing about this technology is that interpretation of the data has quickly moved from the negativity of “artefacts”, and a lack of faith that images showed what was thought to be there, to a situation where major advancements are possible through direct topology studies.  Developments of this technology are likely to include automatic cell identification, in vivo measurements using fine capillary needles and measurements of ligand-surface target interactions on cells – this could influence drug development and biomedical measurements.  Another developing technology related to AFM is the multiple tip biosensor that can sense minute amounts of material in a variety of situations (a “molecular sniffer” – one use I heard of directly from the developer was for wine tasting/testing!).
My second single molecule analysis involved a Magnetic Tweezer setup, which is able to visualise movement of a magnetic bead attached to a single molecule (in our case DNA).  This allowed us to determine how a molecular motor moves DNA through the bound complex but, perhaps more importantly, it led us to develop a biosensor based around this technology that could be used to determine drug-target interactions at the single molecule level and perhaps allow single molecule sensing in anti-cancer drug discovery.  This technology is closely related to optical tweezer systems that have been used in similar studies, and the future is certain to make such technology cheaper and easier to use and to broaden its application in biomedical research.  The key to this development will be the increased sensitivity of single molecule studies and how this will enable more detailed understanding of intermediate steps in molecular motion induced by biomolecules.  I imagine that as newer versions of these devices become more automated, they will be used as biosensors to study more complex systems that involve molecular motion.  In the short term, it seems to me that there is scope for the application of these devices in understanding protein amyloid formation and stability, with a view to determining mechanisms for destabilising such structures.
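For readers wondering how bead positions become numbers: tweezer data on DNA are routinely interpreted against the worm-like chain model of polymer elasticity.  The sketch below uses the standard Marko–Siggia interpolation formula, with the textbook persistence length of about 50 nm for double-stranded DNA – illustrative values, not measurements of ours:

```python
# Marko-Siggia worm-like chain interpolation: force (pN) on a DNA
# molecule stretched to a relative extension z = x/L.
# Persistence length ~50 nm is the textbook value for dsDNA.

def wlc_force(x_over_L, persistence_nm=50.0, temp_K=298.0):
    kT = 1.380649e-23 * temp_K      # thermal energy in joules
    P = persistence_nm * 1e-9       # persistence length in metres
    z = x_over_L
    force_N = (kT / P) * (1.0 / (4.0 * (1.0 - z) ** 2) - 0.25 + z)
    return force_N * 1e12           # newtons -> piconewtons

# force rises steeply as the DNA approaches its full contour length
for z in (0.5, 0.9, 0.99):
    print(round(wlc_force(z), 2))
```

Fitting curves like this to bead position versus applied force is how changes in the effective DNA length – such as those produced by a translocating motor – are extracted from the raw data.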

The best known system in this category of analytical devices is Biacore’s Surface Plasmon Resonance (SPR), which uses a mass detection mechanism based on changes to the plasmon effect produced by electrons in a thin layer of gold.  We have used this to study protein-DNA interactions and subunit assembly, and the technique provides a useful confirmation of older techniques such as electrophoresis.  I have been involved in discussions about the application of this technology in the field, but reliability and setup difficulties remain a problem.  In comparison, the Farfield dual beam interferometer can use homemade chips that simplify setup and seems more reliable for similar measurements.  Where I see a potential for these devices is in the study of protein aggregation, which has tremendous potential in the study of amyloid-based diseases.  This idea sprang from discussions with Farfield about using their interferometer to detect crystallisation and would be an interesting project.  However, if these devices are to have a major impact in biomedical sciences, they need to be easier to set up, more reliable and smaller.  Recent advances are leading SPR toward single molecule sensing (Punj, D., et al., Nat Nano, 2013).  I believe the real key to implementing this technology as a biosensor is to incorporate two technologies in the same device.  We proposed to have a dynamic system, on an interferometer chip, whose activity would switch off the interferometer when active.  This could be used in drug discovery, targeting the drug at two systems simultaneously.  If massively parallel systems can be developed, possibly based around laminar flow, I can see a use in molecular detection of hazardous molecules using either antibodies or aptamers.

I have not directly used cryoEM, but I have seen the results applied to the molecular motor that I have worked with.  The value of the system is that cryoEM allows the gathering of many images of a large protein complex, which allows structural studies of systems that cannot be crystallised or visualised using NMR.  My feeling is that as computing power increases this technique, combined with molecular modelling in silico, will provide structural information for many complex biological systems.  The impact of this knowledge will greatly influence the design of drugs and will aid the biochemical analysis of complex systems.  My feeling is that further development of this technology will revolve around combining it with other techniques for visualising biomolecules: one I have mentioned before is Raman Spectroscopy, which could allow studies of these complexes in situ; another could be single molecule fluorescence (Grohmann, et al. Current Opinion in Chemical Biology).  I can easily imagine collaborative research projects that will bring a variety of such techniques to the production of 3D images of real biological systems isolated from cells.  Such research would have to follow existing models of bidding to use such equipment in centres of excellence.  Such centres would bring together visualisation techniques with single molecule analysis and data from genomics and proteomics.  The research lab of the future will depend on much more international collaboration than we have seen up to now!

The current technology in this area divides into two types of nanopores: physical holes in a surface and reconstituted biological pores.  I have used a physical nanopore to investigate the separation of proteins from DNA using electrophoresis across the nanopore; the beauty of this system is that it also quantifies the number of molecules crossing the pore.  I imagine that such devices will develop using surface-attached biomolecules around the pore, which will introduce specificity into the device, but what I would have liked to develop is a dynamic device for ordered assembly of molecules (an artificial ribosome) where the nanopore allows separation of the assembly line and the drive components – such are the dreams of a retired scientist!
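The quantification step usually amounts to counting transient blockades of the ionic current as molecules pass through the pore; a minimal sketch of that event counting (the current trace and threshold values are invented):

```python
# Count translocation events as dips below a threshold in a
# simulated ionic-current trace (values in pA are made up).

def count_events(current_trace, threshold=70.0):
    events, in_event = 0, False
    for i_pA in current_trace:
        if i_pA < threshold and not in_event:
            events += 1          # current dropped: a molecule entered
            in_event = True
        elif i_pA >= threshold:
            in_event = False     # pore reopened
    return events

trace = [100, 99, 60, 55, 98, 100, 50, 45, 48, 101, 99]
print(count_events(trace))  # two blockades -> two molecules counted
```

Real traces need baseline tracking and dwell-time filtering on top of this, but the principle – one blockade, one molecule – is the same.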
Biological nanopores are the main focus for single molecule sequencing of DNA and the future must be portable, personal sequencing devices (DNA sequencing information must reside with the source of the DNA, and for humans this will eventually lead to personal devices).  However, the level of available data will be enormous and the growth of “omics” research will require new ways to store, organise and access this information.  A new way of studying biological systems is already underway in which analysis of such data allows a better understanding of complex systems.  This will eventually become a part of biomedicine and will underpin personalised medicine.

I was once asked by a student what the future holds for Biology, and I now know it will be an area of significant growth for many years to come, but this requires the right focus for investment and a new direction for undergraduates in their studies – good luck to those I have taught, who now have to lead these developments.

Thoughts on single molecule science

Some while ago I worked with two groups who were using magnetic tweezer systems to study single molecule biophysics and I was always keen to expand that work to new projects (unfortunately retirement moved such thoughts away from actuality).  A recent paper, using single molecule FRET analysis (Haller, A., et al. Proceedings of the National Academy of Sciences, 2013. 110(11): p. 4188-4193), has investigated the folding of an RNA aptamer.

This work encourages me to suggest further studies of aptamer folding using magnetic tweezer systems with a long-term view of developing a sensor for aptamer activity – it would be incredible to develop a toxin sensor, based on aptamers to specific toxins, that detects at the single molecule level!

This is not such an over-ambitious project and relates closely to my own work, with a large European team, to develop a single molecule biosensor.  The basis of the three European projects involved in this work was to develop the magnetic tweezer device as an automatic electronic platform for detection of magnetic bead movement based on changes to the length of DNA molecules located within the device.  The concept was to use the device to identify drug-target interactions involving DNA-manipulating enzymes (such as helicases and topoisomerases) that are potential targets in anti-cancer drug development.

However, I always envisioned a greater potential for such a device and studying aptamers would be only one aspect of such a project.  The device could also be used to study Quadruplex formation, and drugs that affect this, other fibre formation, such as amyloids and even protein-protein interactions involving DNA binding enzymes to enable DNA-loop formation as an indicator of such interactions.  All of these uses provide a potential health-orientated development of the biosensor – just need someone to have the enthusiasm to write the proposal!

Memories of scientific discussions – Raman Spectroscopy

Many years ago, on many a Friday lunchtime, I used to sit in The Three Bulls Heads, Newcastle, discussing applications of Physics within Biology with a Physicist who also worked at The University (if he reads this he will know who he is!).  Those chats were often quite complex, covering lots of difficult chemistry, physics and biology – goodness knows what the regulars thought of us!

One subject area that I was very interested in was the possible application of Raman Spectroscopy within biology, a subject that has interested me for many years, although I have never had much opportunity to apply the technology to any of my research, which is a shame.  So, the question arises, for my blog readers: what is Raman Spectroscopy and has it found uses in biology?

Interestingly, the whole concept of Raman Spectroscopy relates to what has become known as wave-particle duality, which was one of my favourite tutorial subjects that I brought to my unsuspecting biology undergraduates.  It is really difficult to visualise how a particle can be a wave and vice-versa, but I did my best to explain it and this was the basis of my explanation…..

If you think of a wave as a string vibrating very quickly, then it is easy to see that where the movement of the string is maximum this region could be “seen” as a particle.  If the vibration frequency is changed, several waves form and these can be “seen” as several particles.  This is a little simplistic, but it is also interesting as quantum mechanics shows that particles such as electrons and atoms can be described by a wave equation (Schrödinger).  The major difference is that we have to imagine three dimensions, which is very difficult, but if you could then you would understand how particles exist as waves – I am not sure how well this explanation works, but it is the best I have come up with.

So where does Raman scattering fit into this?  Well, Raman described the possibility that photons, as particles, could impact each other, or other particles such as electrons, and be scattered in an inelastic manner (a sort of back scattering).  The interaction of the incident light with the electrons used to bond different atoms in a molecule results in scattered light of different wavelengths (Raman Spectroscopy is the measurement of this light).
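The scattered light is normally reported as a Raman shift in wavenumbers, which is simple arithmetic to compute from the two wavelengths (the 532 nm laser line and 560 nm scattered wavelength below are just example numbers):

```python
# Raman shift (cm^-1) from incident and scattered wavelengths (nm).
# A wavelength of lambda nm corresponds to 1e7 / lambda cm^-1.

def raman_shift_cm1(incident_nm, scattered_nm):
    return 1e7 / incident_nm - 1e7 / scattered_nm

# e.g. a 532 nm laser with Stokes-scattered light at 560 nm
shift = raman_shift_cm1(532.0, 560.0)
print(round(shift))  # a shift of roughly 940 cm^-1
```

Each chemical bond type produces its own characteristic shift, which is why the spectrum acts as a molecular fingerprint for proteins, lipids, DNA and so on.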

A recent paper (Brauchle & Schenke-Layland. Biotechnology Journal, 2013. 8(3): p. 288-297) has reviewed the current applications of Raman Spectroscopy within biology and suggested future applications.  This has brought me up to date following those discussions so many years ago and I am pleased to hear that some of the ideas we had are now feasible, through improved instrumentation.  Specific signals can be measured from proteins, lipids, carbohydrates, DNA etc., which makes it possible to show changes and abnormalities in these components and to detect infections or disease.  It has already been demonstrated that this non-invasive technology can be used to identify bacterial infection, tumour development and cell death.  Undoubtedly, this technology will improve and maybe one day all of those Star Trek fans will see a body scanner that is predictive about well-being – who knows, lol!

My research finally concludes…..

Tomorrow sees the viva for my last ever PhD student, although I only supervised him for his first year.  I wish you well Luke and I am sure you will be fine.

However, as the title of this blog indicates, this really draws my research career to an end, which seems somewhat strange after all of these years.  I started working in the area of Type I Restriction-Modification systems way back in 1974, when I moved from Hull back to my birthplace – Newcastle upon Tyne – to work for Prof Stuart Glover.  But my own research really started when I moved to Portsmouth in 1988 (almost 25 years ago).  The direction changed gradually from the biochemistry of these enzymes, how they manipulate DNA and control the opposing activities of restriction and modification, through to direct visualisation of active single molecules using Atomic Force Microscopy.

But, introducing these enzymes into a single molecule biosensing device was the most unexpected outcome of the work, which led me down a path toward nanotechnology and possible commercialisation – a long way from a study of something once described as “esoteric enzymes”.

Retirement is a pleasure and I don’t miss the stress of research, but I do miss the challenge of writing successful grant applications (along with all the required unsuccessful ones, lol).  My final contribution is a book describing Molecular Motors and their possible uses in Bionanotechnology.

Quantum computing

OK, I admit it – this business of quantum mechanics is far from my area of expertise, but it has always fascinated me.  I have been dubious about the concept of quantum computing for a long time – it all seems a strange contradiction in terms to me!  However, a recent pair of papers in Nature (Hermelin, et al., (2011). Nature 477: 435-438; McNeil, et al., (2011). Nature 477: 439-442) suggests an interesting way forward:

In these papers a system is described for the creation of an acoustic wave that acts as a quantum channel for a single electron, transferring the electron between two distantly separated (several micrometres apart) receptors (quantum dots).  The single electron travels at a speed of 3 um/ns, which equates to roughly 6,700 mph.  These distances ensure no other means of tunnelling or transfer can occur, and the transfer is reversible, for the same electron, and was demonstrated for >60 transfers over a cumulative distance of 0.25 mm.
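The unit conversion is quick to check for yourself:

```python
# Convert the electron's speed from micrometres per nanosecond to mph.
speed_m_per_s = 3e-6 / 1e-9        # 3 um per ns -> metres per second
mph = speed_m_per_s / 0.44704      # 1 mph = 0.44704 m/s exactly
print(round(speed_m_per_s), round(mph))  # 3000 m/s is about 6711 mph
```

So the electron is moving at about the speed of sound in the semiconductor – fast by everyday standards, but comfortably slow enough to track electronically.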

Does this allow the construction of quantum computing devices that incorporate quantum dots separated over such distances?  Well, maybe!