Genetics and Type 2 Diabetes

When I was first diagnosed with Type 2 Diabetes (TD2) I immediately began to search the scientific literature for a clear genetic explanation of this highly prevalent disease, but I was unable to find any firm link between specific genetic loci and occurrence of the disease; although several papers proposed such links, they fell far short of proving them.  However, a recent article in Scientific American (October 2015, pp56-59) suggests one possible explanation for the growth of TD2 and a genetic cause that predates the evolution of Homo sapiens!  Reflecting on this article, I can understand why I had not come across this explanation before: the research has always been linked to a different disorder – Gout, or the “disease of Kings”.

Gout is caused by a build-up of uric acid in the bloodstream, which can then crystallise in capillary vessels, leading to immense pain.  Uric acid is swiftly removed in most animals through breakdown by an enzyme called Uricase, but humans and many primates lack a functional form of the gene responsible for production of this enzyme.  Apparently, the loss of function of this gene occurred some 15+ million years ago, when a series of nonsense mutations inactivated the gene (Oda et al. Mol Biol Evol 2002;19:640–53).  The article proposes that the selective pressure for the loss of Uricase activity began when apes moved from Africa to Europe, which at first provided a plentiful environment, with a sub-tropical climate supplying bountiful fruit for their diet (particularly figs).


However, this period saw the beginning of climate cooling, and the drier, cooler weather changed the European vegetation from rich broadleaf forest toward a savanna-like environment, with much less fruit available; much of this fruit (especially figs) now became seasonal and quite scarce during winter.  As cooling continued these European apes began to starve, so the loss of the Uricase gene must have provided a selective advantage (Hayashi et al. Cell Biochem Biophys 2000;32:123–9).  The normal mammalian reaction to periods of starvation is to produce fat (e.g. as an energy supply during hibernation, or to provide sufficient energy to survive winters).  However, during prolonged periods of starvation foraging for food must continue, especially for primates, which do not hibernate, and for this to be successful glucose is required by the brain.  This is achieved by an “insulin-resistance” effect.  The clue to this selective advantage lies with the fruit-rich diet that the apes in both Europe and Africa were consuming – digestion of fructose leads to production of uric acid, and researchers have found that uric acid can trigger this switch to “insulin-resistance”.  The researchers’ proposal is that the loss of the Uricase gene led to a gradual development of the ability to switch to converting fructose to fat, providing a better chance of surviving food shortages during winter.  They also propose that these European apes brought this major selective advantage back to Africa as they migrated to escape the cooling winters; there they must have out-competed the African apes, leaving behind the mutated Uricase gene that humans have inherited.

If this explanation of these genetic events is correct, we have a genetic explanation of TD2 – sometimes known as insulin-resistance – and what we have now is that processed foods, which often contain corn syrup or table sugar (both extremely rich in fructose), are being turned into fat because of the elevated uric acid levels in our bloodstream.  It would be exciting to think that new drugs could be developed against uric acid production, which might help reduce obesity and TD2.  Genetic Engineering may even hold the possibility of restoring Uricase production in the distant future.  In the meantime, as I have said before, we must aim to increase regular exercise, reduce sugar intake and make fresh fruit our only supply of fructose.  The antioxidants available in fresh fruit help to reduce many side effects of excess uric acid and reduce multiple diseases.

However, from a personal viewpoint I am left with something of a mystery, as this genetic explanation does not explain familial occurrences of TD2, something I have personal experience of!  The best link between TD2 occurrences in families and an observed disorder is that TD2 is tightly linked to β-cell dysfunction in the pancreas (O’Rahilly et al. The Lancet, Volume 328, Issue 8503, 360–364), which is associated with insulin resistance (Kahn, 2003. Diabetologia 46, 3–19), but the nature of this genetic link is complex and confused and involves amyloidosis of insulin.  A detailed description of this will follow.


Science, Discovery, Drugs and The Pharma

Having spent all of my working life in University, I am very aware that my view of The Pharma industry is biased and probably poorly informed. However, I believe that there should always be “space” for an outsider’s viewpoint and scope for questions from an ex-researcher about how to approach scientific discovery. To some extent my views have been influenced by the recent takeover attempt by Pfizer of Astra-Zeneca (a company with whom I have had some contact), but my ideas have also been formed around “stories” and discussions with people who have worked in a wide variety of companies, small and large, involved in drug development.

Drug development is a very slow process and can take decades from “potential” to production, but the rate of development of new drugs has dropped very significantly over recent years. This is an unexpected situation for a research area that has also contributed many new techniques. New chemistry methodologies have progressed through novel, rapid techniques such as combinatorial chemistry, in which many different steps, in a variety of chemical reactions, occur simultaneously over an array of substrates; through recombinant DNA techniques that allow the artificial construction of DNA sequences producing novel products; to the single-molecule biochemistry that has developed over recent years (and in which I was slightly involved). Despite all of this new technology, the number of new, successful drugs reaching the marketplace continues to fall, and in some cases this situation has grown critical (e.g. new antibiotics). The concept of personalized medicine, built around knowledge of an individual’s genetic background from the human genome sequencing project and cheap DNA sequencing, is now available, but it too has failed to drive the development of new drugs.

All of this must be set against the research background that leads to drug discovery, and this quickly leads to the question: “are the large Pharma companies the best system for unique research that might lead to drug discovery?” The views of the current President of the Royal Society, Paul Nurse, as expressed in a recent issue of The Observer (18th May, 2014), suggest that the financial interests of shareholders far outweigh any “zest” for funding research science, and the Pfizer takeover bid for Astra-Zeneca provides a classic example of such a situation. Such worries are compounded by situations where, following a takeover, a company might suppress certain areas of research, or discoveries, in order to enable expansion, or sale, of its own products. I have heard of several such situations across a number of companies, and it is unclear exactly how much “buried” science there is.

It has been a long-standing view in the UK, amongst research-active academics, that “blue-sky research” should be independent of industrial or profit-orientated funding, and that such research is more likely to lead to important discoveries (including drugs) than profit-driven research, which is often too focused on developments related to existing science policy within a company. There are many examples of the large Pharma funding research projects within universities, and some of these projects have “blue sky” elements to their research direction, but my own feeling is that there are not enough projects of this type (at least in the UK), that they are difficult to set up, and that the companies often try to influence the direction of the research for reasons that are not connected to the science.

Perhaps the best example is illustrated by the development of antibiotics (or, more correctly, by the lack of recent development of antibiotics). During the 1950s the widespread use of antibiotics had a major impact on health, reducing the risks from many bacterial pathogens to only a few cases. However, their use in large quantities for the treatment of animals led to the spread of surplus antibiotic in the environment and, subsequently, to the development of antibiotic resistance in many soil bacteria (natural variation in populations of bacteria includes some spontaneously resistant variants, which are selected from the population and allowed to grow in the presence of the antibiotic). Unfortunately, this resistance was able to spread quickly amongst a wide range of bacteria, including many responsible for human ailments (bacteria often carry small, independent pieces of DNA called plasmids that can readily transfer from strain to strain, and these plasmids were found to also carry the genes for antibiotic resistance). When this situation is combined with the habit amongst patients of not taking the full prescribed course of an antibiotic, which can also select for antibiotic-resistant bacteria, there is a real problem with regard to the treatment of infection. However, antibiotics are not a powerful “earner” within the drug development market: they are prescribed for only a week, or some other limited period, compared with “lifestyle drugs” such as beta-blockers that must be taken for the rest of the patient’s life, and this weakens the development of antibiotics by the Pharma. In fact, the Pharma and drug companies in general have shown little interest in developing new antibiotics despite the growing problem of resistant strains of infectious bacteria, and particularly MRSA infections in hospitals. This is undoubtedly the best evidence that the Pharma are not necessarily guided by good science, but rather by profit for shareholders.

Another problem associated with the Pharma concerns the release of information about the drugs these companies sell. As reported in the June 2014 edition of Scientific American, even the doctors who prescribe the drugs rarely have information about side-effects (other than the sanitized details released with the packaging). However, this highly protective, negative attitude toward the release of scientific information (the antithesis of how most research workers operate) may be about to change, as the drug companies are forced to accept that drug trials, which are often funded by public money, should be made available in the public domain. European law already requires this form of release, and large Pharma, such as Pfizer, have now set up Web portals to release such data files. Negative information can often be the most important and yet rarely gets published, so this type of release and sharing of information can only be for the good. The Pharma should be required always to share this data as part of a more open approach to better collaboration in science, but only governments can ensure this happens.

So, what does the future hold? Well, I found it very interesting that the government was willing to become involved in the Pfizer takeover bid, even though this was not in support of the science, and it is important to remember that all of the Pharma have developed drugs and other products on the back of government-funded science (usually at universities, but also through small spin-off companies). Why do we not, as a country, take the next logical step in this process and forge much tighter links between universities and the Pharma? Not in the usual way of depending upon academics to do this, but at a national level, with government funding and tax incentives, looking to drive the development of commercially relevant science through blue-sky projects in areas that can be more reactive to actual needs (researchers are often the first to react to problem situations, as they frequently write research papers explaining the problem). It need not cost the government any more than it currently spends, but by drawing the Pharma into tighter interactions and promoting such activities we could build a better research base. The government could then better steer the output of the industry and request suitable drugs (and antibiotics) from this research. Such a system would undoubtedly have unexpected “spin-offs”, especially in the area of technological developments, which would greatly spur this type of research. Patents would still protect discoveries and profits would still be made as before.

Ah well, I can dream!

Stopping Cancer


Cancer is produced when a cell becomes immortal (it will no longer die in the normal way) and grows without control in an undifferentiated manner (it does not develop into the normal cell type e.g. kidney, liver, lung etc.). There are many triggers for this situation, some are genetic and some are environmental. However, one interesting viewpoint is that the cell no longer behaves normally and should be recognized as different by the host immune system and destroyed, as with viral infections. Unfortunately, the host immune system has many controls in place to prevent such action against host (self) cells.

In this blog I describe a new method for preventing the growth of cancer that uses the body’s own immune system to attack the tumour. To achieve this, key controlling elements are inactivated by binding artificially-produced antibodies to them. These antibodies are produced outside of the patient’s immune system and are a replacement for conventional drugs that might achieve a similar result: by switching off these key control elements, the patient’s immune system is freed to destroy the tumour cells.

Main text:

In a simple world, cancer would not exist, as the body’s immune system should recognize the tumour as “foreign” and destroy it. However, this does not occur and, in addition, there is a danger in having such a response: the tumour cell is not really “foreign” in the way an infectious agent such as a virus is, but is in fact “self”. An immune response to “self” cells is known as an autoimmune response and it is very dangerous — autoimmune diseases (for example lupus, or type 1 diabetes) can be debilitating or even lethal. Therefore, the human body has a series of controls in place designed to prevent such an autoimmune response. This situation is complicated further by the fact that some cancers actively interfere with the immune system to prevent any such attack against them.

One protein that acts to slow an immune response is CTLA-4, and the important job it does in controlling the immune system and preventing an autoimmune response is clearly seen in mice that have been genetically engineered to lack the CTLA-4 protein — they die within weeks as their own immune system attacks and destroys the mouse’s organs. One concept for “stopping cancer” is that blocking CTLA-4 would lead to a vigorous attack on the tumour by the immune system and, indeed, this was found to happen. In addition, there is another control system, PD-1, a protein found on the surface of T-cells (an important component of the immune system), which, when bound by an external protein, will force the T-cell to destroy itself, which in turn slows the immune response. Unfortunately, certain cancer cells have developed surface proteins that elicit this response from T-cells by binding PD-1, consequently triggering an early destruction of T-cells that might otherwise have been targeted against the tumour. If PD-1 could be blocked, the tumour cells could no longer trigger T-cell destruction and the T-cells would, instead, target destruction of the tumour — this provides an ideal, targeted treatment against the cancer. Therefore, the real question of how to “stop cancer” is how to block the function of CTLA-4 and PD-1 in a very accurate way, without stressing the body (by avoiding chemical-based drugs), and thus use the body’s own defence system to stop the cancer.

Normally, when developing drugs for the treatment of various illnesses the idea is to target a drug against an important region of a protein, in such a way that the normal function of the protein cannot be achieved. A relatively new technology has, in recent years, been used to block protein function in this way and this technology involves the use of antibodies. To understand how this technology works it is necessary to understand the structure of an antibody.


Representation of an antibody showing the V region responsible for binding the antigen. The constant region (C) identifies the antibody as human, mouse etc. and varies only between species.

The important capability of antibodies, which makes this blocking function possible, is their ability to bind very tightly to their antigen. The exactness of binding by an antibody to its target protein is a key part of the natural immune response. The immune system has a very effective feedback-amplification mechanism, which allows a specific antibody that tightly binds a specific antigen to be recognized and mass-produced. These antibodies will only bind to the antigen that they first recognized and can make the immune system react against cells carrying that antigen — where such cells are infected by a virus that produces the antigen, the immune system can destroy these infected cells. The human immune system is one of the most advanced protective systems in nature and provides both an early response and a long-acting immunity to infection, which is one of the reasons we live as long as we do!

Polyclonal antibodies bind to different sites on the same antigens (A). These are easily isolated from blood following infection, or following injection of an antigen, and although the antibodies will bind very specifically to a specific antigen, they bind to that antigen at a wide variety of different sites — one antibody binding one site, while another antibody binds a different site on the same antigen.


However, such antibodies, known as polyclonal antibodies, bind to various regions (epitopes) on the surface of the antigen, which means that they cannot all be used to block the same specific site on the protein in the way a drug is designed to do; such a mixture cannot be used to block the normal actions of CTLA-4 and PD-1 as required. All of this changed when monoclonal antibodies were first produced, initially from mice, in the mid-1970s — these bind to a specific, single site (epitope) on a protein and, consequently, blocking of protein/enzyme function using a monoclonal antibody became possible. Humanization of monoclonal antibodies, in which the constant region (Fc) is replaced by that from a human antibody, has allowed selective targeting of proteins exposed on the surface of human cells, or of specific enzymes active within human cells. This novel approach to attacking cancer does not require drug development, but uses monoclonal antibodies that have been produced in a laboratory and are targeted against CTLA-4 and PD-1.
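The reason a tight-binding monoclonal can act like a drug comes down to simple equilibrium binding. This little sketch (my own illustration, with invented concentrations; `fraction_blocked` is a hypothetical helper, not from any paper) shows how the fraction of a target such as PD-1 or CTLA-4 that stays occupied depends on the antibody concentration relative to its dissociation constant Kd:

```python
# Back-of-envelope sketch (standard equilibrium binding, not from the
# article): the tighter a monoclonal antibody binds its single epitope
# (lower Kd), the less antibody is needed to keep the target blocked.

def fraction_blocked(antibody_nM, kd_nM):
    """Equilibrium occupancy of the epitope: [Ab] / (Kd + [Ab])."""
    return antibody_nM / (kd_nM + antibody_nM)

# A tight binder keeps nearly all target molecules occupied at a
# modest concentration; a weak binder barely touches them.
print(fraction_blocked(10.0, 0.1))    # tight binder: ~0.99 occupied
print(fraction_blocked(10.0, 100.0))  # weak binder: ~0.09 occupied
```

The point is only qualitative: because therapeutic monoclonals typically bind with very low Kd, near-complete blockade of the control protein is achievable without flooding the body with a chemical drug.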

A number of companies have developed these antibodies, and trials of this mechanism for controlling cancer are progressing well. There are still problems associated with inflammation and local reactions to the antibodies, but anti-inflammatory drugs help control this situation and the reactions are much less severe and short-lived.

A recent (2007) study of the use of monoclonal antibodies to both of these antigens (CTLA-4 and PD-1) in 53 patients showed that more than 50% responded with tumour shrinkage, and now >900 melanoma patients are being treated in a more extensive study, with results that already look very promising.

Such novel treatments will ease the need for chemical drug design and development, and could provide a more focussed way to treat cancer without the need for radiotherapy or chemotherapy. Drugs based on antibodies are already making significant inroads into the treatment of a wide range of diseases (e.g. Ebola infections) and, as we begin to understand biological systems better, more breakthroughs will be possible in this area. What is exciting about this research is that the treatments are targeted, through PD-1 binding, to specific tumour cells, and that they use the patient’s own immune system to combat the cancer without drug treatment. As we begin to better understand the way the immune system works, there are likely to be more targets for antibody binding, and more such treatments will become possible, allowing a multi-target approach to stop cancer.

GM Plants with a different storyline

Another Scientific American article (William Powell, March 2014, pp54-57) inspired me to write this post, which tells the story of genetically engineering a plant: not a crop, but a tree!  First, a quick background of why….

Apparently, in 1876 an unfortunate situation developed following the importation of chestnut seeds from Japan: it turned out that these seeds were contaminated with spores from a fungus (Cryphonectria parasitica) to which American Chestnuts were highly sensitive, but to which the Japanese Chestnuts were immune.  This fungus effectively strangles the tree through the growth of its mycelial fans, which produce oxalic acid that destroys the bark of the tree while allowing growth of the fungus.  It is this dead wood, produced by the action of the oxalic acid, that leads to strangulation of the tree as it tightens its grip on the trunk.  Only 50 years after the initial import of this deadly fungus, more than 3 billion trees were dead!

A programme of research was initiated to produce hybrid trees by crossing Chinese variants, which are also resistant to the fungus, with American trees to produce a hardy hybrid, but this work will take many years.  Therefore, in parallel, a project was initiated to make use of what at the time was a novel approach: genetic engineering of the plant.  As is often the case in science, this idea was built around a fortunate coincidence, in which a group had isolated a wheat gene for oxalate oxidase and introduced this gene into other plants using a well-described engineering system based on Agrobacterium.  This enzyme was, of course, ideal for the proposed project, as it breaks down oxalic acid, the primary cause of the blight damage.  In addition, they had available genes that produce antimicrobial peptides (AMPs) that disrupt C. parasitica infection and, as time has passed, genome sequencing projects have pointed to the genes in Chinese Chestnut trees that are responsible for resistance to the fungus.  The future looks promising for genetically engineering the tree instead of depending upon hybrids.

The use of the soil bacterium Agrobacterium tumefaciens is an interesting story in itself, and a subject I enjoyed teaching about as a perfect example of turning a natural system into advanced technology.  The odd thing about this bacterium is that it has the ability to infect plants with its own DNA, making the plant produce unusual amino acids that the bacterium cannot synthesise itself.  The result of this infection of foreign DNA is that the plant develops small tumours, but the bacterium benefits from the availability of these biomolecules.  Genetic engineers were able to manipulate this system so that they could insert “foreign” DNA into the bacterial plasmid, in place of the tumour-forming components, enabling the bacterium to transfer this foreign DNA into a wide variety of plants in a stable and predictable manner.  Eventually, the research group were able to develop the mechanisms for tissue culture of the genetically altered plant cells, and a model system based on poplar trees was available to initiate the experimental approach to overcoming the blight infection.

There are now more than 1,000 transgenic Chestnut trees growing in field sites, public acceptance of this approach to restoring a small piece of biodiversity is good, and the future holds promise for further such experimentation.  My own view is that this is a piece of genetic engineering that sounds very good and very promising for the future.  My only caution, also expressed by the researchers, is that the spread of the genetically modified seeds, which may help remaining trees recover from infection, may also lead to cross-pollination with closely related plants.  However, there are few trees closely related to the American Chestnut, so this seems unlikely.  A good story that supports genetic engineering in plants!


A model organism – the virtual bacterium.

I was reading an article in Scientific American today that got me thinking about the complexities of biology – the article described the production of a virtual bacterium, using computing to model all of the known functions of a simple single cell.  The article was a very compelling read and presented a rational argument for how this could be achieved, based on modelling a single bacterium that would eventually divide.  The benefits of successfully achieving this are immense for both healthcare and drug development, but the difficulties are equally immense.

In order to simplify the problem, the organism chosen to be modelled was a bacterium with one of the smallest known genomes – Mycoplasma genitalium, a bacterium that has only 500+ genes, all of which have been sequenced.  This bacterium is also medically important, which adds weight to the usefulness of a virtual organism.  The problem of programming the computer was divided into modules, each of which describes a key process of the cell, and all of which can feed back on each other in a way that reflects actual binding coefficients for protein-protein and protein-DNA interactions.
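To make the modular idea concrete, here is a minimal sketch (entirely my own, with invented module names and toy rate constants; it is not the Stanford code) of how independent modules can update a shared cell state each time-step, so that the output of one process feeds back into the others:

```python
# Toy modular cell simulation (my own sketch): each module is a function
# that updates a shared state dictionary once per time-step.

def transcription(state, dt):
    state["mRNA"] += 0.5 * dt                      # constant synthesis (toy rate)

def translation(state, dt):
    state["protein"] += 0.2 * state["mRNA"] * dt   # protein output depends on mRNA

def degradation(state, dt):
    state["mRNA"] -= 0.1 * state["mRNA"] * dt      # first-order decay closes the loop

def run(modules, state, steps, dt=1.0):
    for _ in range(steps):
        for module in modules:                     # every module sees every update
            module(state, dt)
    return state

state = run([transcription, translation, degradation],
            {"mRNA": 0.0, "protein": 0.0}, steps=100)
print(round(state["mRNA"], 2), round(state["protein"], 2))
```

Even in this crude form, the mRNA level settles to a steady state set by the balance of synthesis and decay rather than being imposed from outside, which is the kind of emergent behaviour a whole-cell model is after.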

As I read the article, I began to realise that there were some simple problems associated with the description of how the computer would “manage” the cell, and when the author described doubling protein concentrations prior to cell division I knew there were problems with their model – this simplistic approach is not what happens in the cell.  Cellular control is an important aspect of this modelling and must be correct if the cell is to be realistic.  I can illustrate what I mean with one example – plasmid replication.  A plasmid is an autonomous circle of DNA, often found in bacteria; plasmids are easy to understand and ubiquitous in nature.

Replication of plasmid DNA:

The number of copies of a plasmid is tightly controlled in a bacterial cell, and this control is usually governed by an upper limit on the concentration of an encoded RNA or protein: when the concentration of this product drops, such as after division of the cell, replication will occur and the number of plasmids will increase until the correct plasmid number is attained (indicated by the concentration of the controlling gene product).

This is a classic example of cellular control and is very different from a model based on doubling protein concentration in anticipation of cell division.
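The inhibitor-based control described above can be captured in a few lines. This is a deliberately crude toy model (my own sketch; the threshold and rates are invented for illustration), but it shows the key behaviour: after division halves the copy number, replication is released until the inhibitor concentration climbs back to its set point, with no anticipatory doubling anywhere:

```python
# Toy model (my own sketch, not from the article): plasmid copy-number
# control via an inhibitor made in proportion to plasmid number.
# Replication fires only while the inhibitor is below a threshold,
# so copy number self-corrects after cell division.

def simulate(plasmids, steps, threshold=10.0, inhibitor_per_plasmid=1.0):
    history = [plasmids]
    for _ in range(steps):
        inhibitor = plasmids * inhibitor_per_plasmid  # product tracks copy number
        if inhibitor < threshold:                     # control released: replicate
            plasmids += 1
        history.append(plasmids)
    return history

before_division = simulate(10, 5)   # at the set point: no net replication
after_division = simulate(5, 10)    # halved by division: climbs back to set point
print(before_division[-1], after_division[-1])
```

With the toy numbers, a cell at the set point stays there, while a freshly divided cell returns to the same copy number purely through the feedback loop.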

Mycoplasma genetics and the complexity of biology:

This whole problem also got me thinking about my own subject area and my brief excursion into the study of restriction enzymes from Mycoplasma.  Restriction systems are a means for bacteria to protect themselves from viral invasion and, despite the small size of Mycoplasma genomes, these strains encode such systems.  There is clear evidence that such systems are “selfish”, and maybe they are fundamental to the long-term survival of bacteria, so I think they need to be dealt with in the model organism.  However, things begin to get complicated when you look at the well-described system from Mycoplasma pulmonis (a slightly more complex relative of the organism used for the model).  Instead of a single set of genes for a restriction system, as usually found in other organisms, the restriction genes of Mycoplasma pulmonis are capable of switching in a way that can generate four different proteins from a single gene.  This is where the complexity of modelling an organism arises: while the organism used may have a simple genome, it is important to understand how even simple organisms can increase their genetic complexity without increasing their DNA content.


I think the work at Stanford is both interesting and important, and I think they have achieved a very important step along the road to modelling a living cell, but I also think they may need more information, and more complex modules, as they try to be more accurate with even these simple organisms.  It will be a long road before we have a model of a human cell, but what an incredible thought that would be!


Proteins, Peptides and Amyloids – Alzheimer’s Disease

If you have read my recent science blogs you will be aware that I have an interest in Alzheimer’s Disease, based on work involving protein aggregation.  A recent article by Bhattacharjee and Bhattacharyya (Journal of Biological Chemistry, 2013. 288(42): 30559-30570) brought back a result obtained in my lab many years ago and got me thinking about how small peptides can affect protein aggregation.

So, first the unexpected result from many years ago:

At the time we were studying a small peptide called Stp, which was able to switch off a complex restriction and modification enzyme called EcoprrI, but the researcher carrying out this work made an unexpected observation: the Stp peptide, when added to a group of proteins of different sizes (used as size markers), altered their apparent size and even aggregated many of the proteins (you can see this aggregation in the wells at the top of the gel).  This peptide was found to be able to inhibit certain protein-protein interactions (later we realised that this is how it prevented the restriction enzyme from working), but clearly it could also affect the behaviour of other proteins in a gel.  The effect was primarily aggregation, but the result made me think at the time that amphipathic peptides might also influence, or even disrupt, protein-protein interactions.  We had just observed that the EcoR124I restriction enzyme could dissociate as a means of controlling function, and I wondered if Stp would enhance that dissociation – lo and behold, Stp did indeed disrupt the subunit assembly of EcoR124I, and of EcoprrI; we had demonstrated how the anti-restriction activity of this small peptide worked.

And so, secondly, to the recent observations with amyloidosis:

Alzheimer’s Disease is initiated by protein aggregation, in which β-amyloid (Aβ) peptides oligomerise into fibril structures that eventually form plaques within the brain.  Disruption of these aggregates would be a very important treatment for Alzheimer’s and is an area of intensive research.  What Bhattacharjee and Bhattacharyya have shown is that a small peptide, found in Russell’s viper venom, not only destabilises the amyloids, but is also stable in blood for up to 24 hours.  This is a very interesting and promising observation that should stimulate the study of the effect of peptides on protein-protein interactions, and perhaps lead to a non-toxic version of the peptide that could be used to treat Alzheimer’s.

Sometimes it is very interesting how one piece of science can stimulate interest in another, as illustrated above; it also shows how diverse areas of research can sometimes be linked.  Great ideas are not always the result of hard work, but more often arise from interactions between different researchers – keep collaborating, people!

Update – Nov. 2015:

In the latest issue of Scientific American, under the heading “Advances”, they report an article in Nature from work at UCL: in autopsies of several patients who died from CJD (the human version of “mad cow disease”, which they acquired from infected growth hormone treatment), the researchers found evidence of the amyloid formation associated with Alzheimer’s – at too early an age for natural onset.  Further work suggests that amyloid precursors, or small clumps of the beta-amyloid, may act in the same manner as prions do in the onset of CJD and lead to Alzheimer’s disease.  It would seem to me that the time is now ripe to begin a serious study of the protein misfolding, aggregation and conformational changes that may trigger these disorders.

Synthetic Biology – will it work?

Every now and then science comes up with a new approach to research that impacts on technology, but these approaches are often controversial, and the headlines we see are far from the truth and can damage investment into the new techniques.  One good example is the Genetic Modification of plants and the production of GM-foods, which has a really bad press in Europe despite many obvious benefits for the world economy and for disease control.  The latest technology, which follows from the explosion in genetic engineering techniques during the 1990s, builds on concepts developed in bionanotechnology and is known as Synthetic Biology.  But what is Synthetic Biology?  Will it work?  And what are the dangers versus benefits of these developments?  Gardner and Hawkins (2013) have written a recent review about this subject, which made me think a blog on the subject was overdue.

My background in this area is two-fold:

  1. I was part of a European road-mapping exercise, TESSY, that produced a description of what Synthetic Biology is and how it should be implemented and funded in Europe.
  2. I was also Project Coordinator for a European research project – BioNano Switch – funded by a scheme to support developments in Synthetic Biology, which aimed to produce a biosensor using an approach embedded in the concepts of Synthetic Biology.

So, what is Synthetic Biology?  I think the definition of this area of research needs to be clearly presented, something that was an important part of the TESSY project, as the term has become associated simply with the production of an artificial cell.  However, that is only one small aspect of the technology and the definition TESSY suggested is much broader:

Synthetic Biology aims to engineer and study biological systems that do not exist as such in nature, and use this approach for:

  • achieving better understanding of life processes,
  • generating and assembling functional modular components,
  • developing novel applications or processes.

This is quite a wide definition and is best illustrated with a simple comparison – in electronic engineering there exists a blueprint (circuit diagram) that shows how components (resistors, capacitors etc.) can be fitted together in a guaranteed order to produce a guaranteed result (a device such as an amplifier).  The Synthetic Biology concept would be to have a collection of such components (DNA parts that include promoters, terminators, genes and control elements; cellular systems, including artificial cells and genetically engineered bacteria capable of controlled gene expression; and interfaces that can connect biological systems to humans for useful output).  This would mimic the electronic situation and provide a rapid mechanism for assembly of biological parts into useful devices in a reliable and predictable manner.  There are many examples of such concepts, but the best known is the BioBricks Foundation.  However, at the TESSY meeting I was keen to make it clear that there are fundamental problems with this concept, so what are the problems?
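The “circuit diagram” analogy above can be sketched in code – this is purely an illustrative toy model, not a real Biobricks API; all part names and sequences below are made up:

```python
from dataclasses import dataclass

@dataclass
class Part:
    """A DNA part in the registry, analogous to an electronic component."""
    name: str
    kind: str      # "promoter", "gene" or "terminator"
    sequence: str  # placeholder DNA strings in this sketch

def assemble(parts):
    """Concatenate parts into one construct, enforcing the expected
    order: promoter -> gene -> terminator (the 'circuit diagram')."""
    expected = ["promoter", "gene", "terminator"]
    kinds = [p.kind for p in parts]
    if kinds != expected:
        raise ValueError(f"unexpected part order: {kinds}")
    return "".join(p.sequence for p in parts)

# Assemble a hypothetical expression construct from catalogued parts.
construct = assemble([
    Part("pLac", "promoter", "TTGACA..."),
    Part("gfp", "gene", "ATGAGT..."),
    Part("T1", "terminator", "CCAGGC..."),
])
```

In electronics this wiring-by-catalogue always works; the point of the paragraphs that follow is that the biological equivalent frequently does not.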

At its simplest, a Biobricks database would consist of a number of different types of DNA (promoters, short DNA sequences that switch a gene on; terminators, short DNA sequences that switch a gene off; control elements, DNA sequences that control the promoter switching a gene on or off as required; genes, DNA sequences that produce biotechnologically useful products; and cells, the final package that enables the DNA to do its work and produce the required product), which sounds logical and quite simple.  However, biological systems are not as reliable as electronic systems, and combinations of promoters and genes do not always work.  One of the major problems with protein production using such artificial recombinant systems is protein aggregation, resulting in insoluble proteins that are non-functional.  In addition, there are many examples (usually unpublished) of combinations of Biobricks that do not work as expected, or that result in protein aggregation when used in a different order; none of this ever happens with electronic components.  The reasons are far from clear, but are closely related to the complexity of proteins and the need for them to operate in an aqueous environment.  My suggestion for dealing with this situation is to have a large amount of metadata associated with any database of Biobricks, including information about failures or problems of protein production from specific combinations.  However, I am not aware of any such approach!
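The metadata idea could be as simple as recording reported failures against specific part combinations, so a designer can check a planned construct before building it – a minimal sketch, with entirely hypothetical data and names:

```python
# Hypothetical failure reports keyed by (promoter, gene) combination.
# In a real registry this would be community-curated metadata stored
# alongside the parts themselves.
failure_metadata = {
    ("pLac", "membrane_protein_X"): ["insoluble aggregates at 37C"],
    ("pT7", "toxin_Y"): ["no expression; suspected host toxicity"],
}

def known_problems(promoter, gene):
    """Return any recorded failure reports for this combination,
    or an empty list if nothing has been reported."""
    return failure_metadata.get((promoter, gene), [])
```

An empty result would not guarantee success, of course – only that no failure has yet been reported for that combination.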

There are other aspects of Synthetic Biology that do not depend on Biobricks, and one example is the artificial cell.  The ideal for such a system is a self-assembling package, capable of entrapping DNA, capable of replication and survival, and able to produce useful biomaterials; significant steps have been made toward such a system.  However, one area of concern as such systems are developed is containment – can we really be sure these artificial microbes will remain in a contained environment and not escape to interact with, and possibly change, the natural bacterial population?  Nevertheless, the power and capability of such a system should not be underestimated, and its likely use in future medicine could be immense – simple examples would be as delivery systems for biomaterials that can activate cellular changes by targeting the required cell and then switching on protein production (e.g. of hormones).  This type of targeted medicine would be a major breakthrough during the later part of this century.

Another type of Synthetic Biology involves the artificial assembly (possibly self-assembly) of biomaterials onto an artificial surface in a way that is unlikely to occur naturally, but provides a useful device – I see this as more like what a Biobricks project should be – such a system is usually modular in nature and the biomaterial would normally be produced using recombinant techniques.  The research project I mentioned earlier involved such a device, and the outcome was a single-molecule biosensor for detecting drug-target interactions at the limits of sensitivity.  The major issue we had in developing this device was the precise and accurate attachment of biomaterials to a surface in such a way that they function normally.  However, overall the project was successful and shows that a Synthetic Biology approach has merits.

What are the benefits that Synthetic Biology can provide society?  Well, one advantage is a more systematic approach to biotechnology, which to date has tended to move forward at the whim of researchers in academia or industry.  Assuming the problems with protein production mentioned above can be better understood, there could be a major boost in the use of proteins for biotechnology.  In addition, Synthetic Biology techniques offer a unique opportunity for miniaturisation and mass production of biosensors that could massively improve medical diagnosis.  Finally, artificial cells have many future applications in medicine, if they can be produced in a reliable way and made to work as expected:

  1. They could provide insulin for diabetics.
  2. Be made to generate stem cells, which could be used in diseases such as Alzheimer’s and Huntington’s.
  3. They could deliver specific proteins, drugs and hormones to target locations.
  4. They could treat diseases that result from faulty enzyme production (e.g. Phenylketonuria).
  5. They could even be used to remove cholesterol from the blood stream.

However, there are always drawbacks and risks associated with any new scientific advance:

  1. Containment of any artificial organism is the most obvious, but this risk is enhanced by the possibility of using the organism to produce toxins that would allow its use as a biological weapon.
  2. The ability to follow a simple “circuit diagram” for protein production, combined with a readily available database of biological material, could enable a terrorist to design a lethal and unpredictable weapon much more complex and perhaps targeted than anything known to date.
  3. Inhibition of research through a readily available collection of materials that prevents patent protection of inventions.  This could be complicated by the infringement of patents by foreign powers in a way that blocks conventional research investment.
  4. Problems associated with the introduction of novel nano-sized materials into the human body, including artificial cells, which may be toxic in the long term.

My own feeling is that we must provide rigorous containment and controls (many of which already exist), but allow Synthetic Biology to develop.  Perhaps there should be a review of the situation by the end of this decade, but I hope that the risks do not materialise and that society can benefit from this work.