GM Plants with a different storyline

Another Scientific American article (William Powell, March 2014, pp. 54-57) inspired me to write this post, which tells the story of genetically engineering a plant – not a crop this time, but a tree!  First, a quick background of why….

Apparently, in 1876 an unfortunate situation developed following the importation of chestnut seeds from Japan: it turned out that these seeds were contaminated with spores of a fungus (Cryphonectria parasitica) to which American chestnuts were highly sensitive, but to which Japanese chestnuts were immune.  This fungus effectively strangles the tree through the growth of its mycelial fans, which produce oxalic acid that destroys the bark while allowing further growth of the fungus.  It is this dead wood, produced by the action of the oxalic acid, that strangles the tree as it tightens its grip around the trunk.  Only 50 years after the initial import of this deadly fungus, more than 3 billion trees were dead!

A programme of research was initiated to produce hybrid trees by crossing Chinese variants, which are also resistant to the fungus, with American trees to produce a hardy hybrid, but this work will take many years.  Therefore, a parallel project was initiated to make use of what was, at the time, a novel approach: genetic engineering of the plant.  As is often the case in science, this idea was built around a fortunate coincidence, in which a group had isolated a wheat gene for oxalate oxidase and introduced this gene into other plants using a well-described engineering system based on Agrobacterium.  This enzyme was, of course, ideal for the proposed project as it breaks down oxalic acid, the primary cause of the blight.  In addition, they had available genes that produce antimicrobial peptides (AMPs) that disrupt C. parasitica infection and, as time passed, genome sequencing projects have pointed to the genes in Chinese chestnut trees that are responsible for resistance to the fungus.  The future looks promising for genetically engineering the tree instead of depending upon hybrids.
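For context, the chemistry here is simple: oxalate oxidase catalyses (as far as I recall) the oxygen-dependent breakdown of oxalate into carbon dioxide and hydrogen peroxide:

    \[ \mathrm{oxalate} + \mathrm{O_2} + 2\,\mathrm{H^+} \longrightarrow 2\,\mathrm{CO_2} + \mathrm{H_2O_2} \]

So a single wheat gene is enough to remove the fungus’s main weapon as fast as it is produced.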

The use of the soil bacterium Agrobacterium tumefaciens is an interesting story in itself and a subject I enjoyed teaching as a perfect example of using a natural system for advanced technology.  The odd thing about this bacterium is that it has the ability to infect plants with its own DNA, which makes the plant produce unusual amino acids that the bacterium cannot synthesise itself.  The result of this infection with foreign DNA is that the plant develops small tumours, but the bacterium benefits from the availability of these biomolecules.  Genetic engineers were able to manipulate this system so that they could insert “foreign” DNA into the bacterial Ti plasmid, in place of the tumour-forming components, and enable the bacterium to transfer this foreign DNA into a wide variety of plants in a stable and predictable manner.  Eventually, the research group were able to develop the mechanisms for tissue culture of the genetically altered plant cells, and a model system based on poplar trees was available to initiate the experimental approach to overcoming the blight infection.

There are now more than 1,000 transgenic chestnut trees growing in field sites, public acceptance of this approach to restoring a small piece of biodiversity is good, and the future holds promise for further such experimentation.  My own view is that this is a piece of genetic engineering that sounds very good and very promising for the future.  My only caution, also expressed by the researchers, is that the spread of the genetically modified seeds, which may help remaining trees recover from infection, may also lead to cross-pollination with closely related plants.  However, there are few trees closely related to the American chestnut, so this seems unlikely.  A good story that supports genetic engineering in plants!

 

Using Sky Broadband – some interesting problems

Sagem router

When we moved to our new house we were obliged to use Sky, as there is no cable supplied on the estate and the house is wired for a dish supply.  This worried me as I had never used Sky before and I was unsure about broadband, service support and the eventual ownership of the hardware.  As soon as Sky was installed I became aware of an immediate problem – we have a patio that sits above the dining room, on a concrete support, which reduces our mobile phone signal to almost zero and almost wiped out the WiFi signal from the router.  However, despite my concerns about after-sales service, I was, at the time, able to contact the Helpdesk directly by phone and they were very helpful.  They provided, free of charge, a wireless extender, but this did not overcome the problem and, as a consequence, I decided that my best solution was a TP-Link powerline system that transmits the internet connection via the mains wiring in the house.  The setup purchased consists of a link to the router, a wired link that allows me to connect my smart TV to the internet, and a wireless/wired link that allows me to connect the Sky box in the dining room while also providing the missing wireless signal in the same room.  This solution works really well and, by using the same SSID for the TP-Link as provided by my Sky router, my laptop connects automatically as I move around the house.  The system seems very stable and has only been switched off a couple of times over the last year.

So one unusual problem was solved, but the take-home message for me was that Sky provided really good support at the time and did all they could to resolve a problem that was not of their making.  However, this telephone help service was withdrawn and things are less good now…

Some while ago I posted a message, on the Sky Community, about a problem with the SAGEM router supplied with my Sky broadband.  The problem I had was that I could not get my HP inkjet network printer to connect to the router.  This is an annoying situation, as one reason for purchasing the printer was that I could access it anywhere within the house, using a laptop, without having to have the main desktop computer switched on – the printer should work after plugging a network cable into the printer and the router.  This worked fine when I was with Virgin but, as detailed in my Sky Community post, not with the Sky router.  In fact the router delivered a network address that simply “said” the device was local, but did not register the device on the network.

On the Community pages, I was given a few pointers of things to try (which you can read in the link above), but nothing worked, and a bit more reading outside of the Sky Community led me to conclude that the router is locked down using something called MER – MAC Encapsulated Routing – which allows Sky (in this case) to disable certain features and so “support” their broadband service in a predictable way.  While I understand why Sky are doing this, I was very disappointed by the lack of support for my problem from the Sky Community, who eventually simply archived my post without ever addressing the issue.  To me, locking the router down such that a printer cannot be attached is an excessive restriction on use of the machine, which should be declared up front – especially since the SAGEM has network connections available, suggesting this is possible.  However, I have overcome this situation by installing my own router, which was much easier than I expected once I had gathered the required information; although it does infringe the Terms and Conditions of the Sky package, I understand they do not enforce these conditions in reality.

The first stage of this process was to get the Username and Password for the router, which is different from the SSID and password that Sky supply and is not available from Sky.  In fact, discovering this information was the most difficult step, and information around the Internet is fragmented at best.  So, here is what I hope is a simple explanation of what to do:

The first step is to obtain the LAN MAC address for the modem and identify the make and model of the router supplied by Sky – in my case this was a SAGEM F@ST2504, and the MAC address is available from the router’s Status Page, as detailed in the video HERE.  If you have never accessed the Router Status Page then watch this video first and take your time to look at the router setup and what information is available where (type http://192.168.0.1/ into your web browser; the Username is admin and the Password is sky – N.B. this is not the username and password required to set up your own router!).  This information is used to generate a Username and Password at https://www.cm9.net/skypass/.  Once you have this information, setting up your own router is easy.
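If you prefer the command line to a browser, here is a minimal Python sketch of that first step – fetching the router status page so you can read off the LAN MAC address.  I am assuming the SAGEM serves its status page over HTTP basic authentication at the default address; the page path and HTML layout will vary by firmware, so treat this as a starting point rather than a recipe:

    # Fetch the router status page and print the HTML, so the LAN MAC
    # address can be read off (assumes HTTP basic auth with the default
    # admin/sky credentials; adjust for your firmware).
    import requests

    ROUTER_URL = "http://192.168.0.1/"

    response = requests.get(ROUTER_URL, auth=("admin", "sky"), timeout=5)
    response.raise_for_status()
    print(response.text)  # search the output for "MAC" to find the address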

Choosing a suitable router was not as easy as I thought, as I wanted a system that would readily accept the username and password, but I was not sure whether all routers would do this – I now think that they would, once you choose the required protocol for the new router, which is PPPoA.  However, this slight problem was easily solved for me when I discovered someone recommending the TP-Link routers as a replacement for the SAGEM, so I bought a TP-Link TD-W8970 V3 from Amazon (it is also available from WH Smith).

In summary:

  1. Obtain the model of the current Router and its LAN MAC address from the Router Status Page at http://192.168.0.1 – Username admin Password sky.
  2. Use this information to generate the required Router Username and Password at https://www.cm9.net/skypass/.
  3. Connect the new router to your computer and access the setup page as detailed in its instructions.
  4. Choose PPPoA as the protocol and enter the generated username and password when requested.  Change the default SSID of the new router to that of the SAGEM (usually SKY*****) and enter the WPA security password (as provided on a small business card by Sky).
  5. Plug in the ADSL connection used to connect the SAGEM router and off you go!

Mine worked immediately, although I did have to register the computers on a Home Network.  I will update if I have any problems, but what is important is that I now have my printer networked, as originally required.  I am now looking at whether I can attach a USB drive to the router and use it as a NAS media server.

A model organism – the virtual bacterium.

I was reading an article in Scientific American today that got me thinking about the complexities of biology – the article described the production of a virtual bacterium, using computing to model all of the known functions of a simple single cell.  The article was a very compelling read and presented a rational argument for how this could be achieved, based on modelling a single bacterium that would eventually divide.  The benefits of a successful approach are immense for both healthcare and drug development, but the difficulties are equally immense.

In order to simplify the problem, the organism chosen to be modelled was the bacterium with the smallest known genome – Mycoplasma genitalium, a bacterium that has only 500 or so genes, all of which have been sequenced.  This bacterium is also medically important, which adds weight to the usefulness of a virtual organism.  The problem of programming the computer was divided into modules, each of which describes a key process of the cell, and all of which can feed back in a way that reflects actual binding coefficients for protein-protein and protein-DNA interactions.
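To make the modular idea concrete, here is a toy sketch of my own (not the Stanford code – all module names and numbers are hypothetical) in which each module updates a shared cell state once per time step, so the modules influence one another only through that state:

    # Toy modular cell simulation: each module reads and writes a shared
    # state dictionary once per time step (all values are hypothetical).
    from typing import Callable, Dict, List

    State = Dict[str, float]

    def metabolism(s: State) -> None:
        s["atp"] += 1.0                    # toy energy production

    def transcription(s: State) -> None:
        if s["atp"] >= 0.5:                # transcription costs energy
            s["atp"] -= 0.5
            s["mrna"] += 0.1

    def translation(s: State) -> None:
        if s["mrna"] >= 0.1 and s["atp"] >= 1.0:
            s["atp"] -= 1.0
            s["protein"] += 0.05

    modules: List[Callable[[State], None]] = [metabolism, transcription, translation]

    state: State = {"atp": 10.0, "mrna": 0.0, "protein": 0.0}
    for second in range(3600):             # simulate an hour in 1 s steps
        for module in modules:
            module(state)
    print(state)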

As I read the article, I began to realise that there were some simple problems with the description of how the computer would “manage” the cell, and when the author described doubling protein concentrations prior to cell division I knew there were problems with their model – this simplistic approach is not what happens in the cell.  Cellular control is an important aspect of this modelling and must be correct if the cell is to be realistic.  I can illustrate what I mean with one example – plasmid replication.  A plasmid is an autonomous circle of DNA, often found in bacteria; plasmids are easy to understand and ubiquitous in nature.

Replication of plasmid DNA:

The number of copies of a plasmid is tightly controlled in a bacterial cell, and this control is usually governed by an upper limit on an encoded RNA or protein.  When the concentration of this product drops, such as AFTER division of the cell, replication will occur and the number of plasmids will increase until the correct plasmid number is attained (indicated by the concentration of the controlling gene product).
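A toy simulation makes the feedback obvious.  In this sketch (my own illustration, with made-up numbers) the probability of a replication event falls as the copy number – and hence the concentration of the controlling gene product – approaches its set point:

    # Toy model of plasmid copy-number control by negative feedback:
    # replication becomes less likely as copy number nears the set point.
    import random

    SET_POINT = 20                      # steady-state copy number (made up)

    def one_generation(copies: int, steps: int = 100) -> int:
        for _ in range(steps):
            p_replicate = max(0.0, 1.0 - copies / SET_POINT)
            if random.random() < p_replicate:
                copies += 1
        return copies

    copies = SET_POINT // 2             # copy number halves at cell division
    print(one_generation(copies))       # recovers to roughly SET_POINT

Note that nothing here anticipates division; the system simply reacts to the drop in repressor concentration after the cell divides.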

This is a classic example of cellular control and is very different from a model based on doubling protein concentration in anticipation of cell division.

Mycoplasma genetics and the complexity of biology:

This whole problem also got me thinking about my own subject area and my brief excursion into the study of restriction enzymes from Mycoplasma.  Restriction systems are a means for bacteria to protect themselves from viral invasion and, despite the small size of Mycoplasma genomes, these strains encode such systems.  There is clear evidence that such systems are “selfish”, and maybe they are fundamental to the long-term survival of bacteria, so I think they need to be dealt with in the model organism.  However, things begin to get complicated when you look at the well-described system from Mycoplasma pulmonis (a slightly more complex relative of the organism used for the model).  Instead of a single set of genes for a restriction system, as usually found in other organisms, the restriction genes of Mycoplasma pulmonis are capable of switching in a way that can generate four different proteins from a single gene.  This is where the complexity of modelling an organism arises: while the organism used may have a simple genome, it is important to understand how even simple organisms can increase their genetic complexity without increasing their DNA content.
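The combinatorics behind that switching are easy to illustrate.  As a toy example (the domain names are hypothetical, but the principle is the same), shuffling two alternative N-terminal halves against two alternative C-terminal halves of a protein yields four distinct products from a single locus:

    # Four proteins from one gene: two alternative N-terminal halves
    # combined with two alternative C-terminal halves (names hypothetical).
    from itertools import product

    n_halves = ["N1", "N2"]
    c_halves = ["C1", "C2"]

    variants = [f"{n}-{c}" for n, c in product(n_halves, c_halves)]
    print(variants)  # ['N1-C1', 'N1-C2', 'N2-C1', 'N2-C2']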

Conclusion:

I think the work at Stanford is both interesting and important, and they have achieved a very important step along the road to modelling a living cell, but I also think they may need more information, and more complex modules, as they try to be more accurate with even these simple organisms.  It will be a long road before we have a model of a human cell, but what an incredible thought that would be!

 

Who owns science?

You might think that this is a simple question with a simple answer, but the truth is far from simple, and this subject led to a very lengthy debate the last time I raised it with my colleagues.  However, before we get to the subject, a simple definition or two are needed:

  • Science – what I mean by science is “experimental-based discovery science” of the type that is carried out in Universities.  I do not mean industry-funded science, or research that involves review of a subject.  Pure research, carried out for the sake of interest is often known as “blue-skies” research, but it can often lead to unexpected commercial outcomes.
  • Ownership of science – by “own” I really mean how accessible scientific information is, as it is access to the science that defines ownership.  I hope this will become clearer as I develop this blog!

I guess before we get to ownership of science it is important to first explain how research is funded in the UK and how it is carried out in Universities:

Research Funding:


There are a large number of funding sources available both in the UK and across Europe, of which the research councils and European research grants are the largest funders, but significant funding also comes from charities and from private sources.  All of these types of funding are competitive and awarded to individual scientists, or groups of scientists who collaborate toward an overall goal.

Universities also receive direct funding of research from government, through the research councils, in the form of infrastructure awards (often based on how many research grants were awarded, but also on measured success of individual researchers).  Sometimes, this funding is targeted at commercially-orientated research and sometimes at “blue-skies” research.  In addition, there are various sources of infrastructure funding, to which universities can bid in a competitive way, in order to establish equipment or resources for research.

Finally, individual researchers may have access to funds that allow small research projects to be initiated, either university-based or belonging to the individual within the university’s research framework (overheads and slush funds).

Establishing a research project:

Any full-time employee at a university can apply for a research grant and carry out research, although to get a competitive grant the individual usually needs an established research profile.  However, it is often a surprise to those outside the university system that carrying out research is not a contractual requirement for a university employee, but simply something that is often expected or desired by the employer.  So, academics do not have to apply for research grants and are not forced to do so – research usually springs from their own interests – and many academics only carry out teaching duties.

Those that want to engage in a research project have two ways to start:

  1. Join an existing research group and follow their own path within that research group.
  2. Establish a new research group, seek external funding and hope to gain sufficient expertise to follow the first funding with further funding – often a difficult pathway.

It is generally accepted that the chance of obtaining funding from most sources is at best 1 in 5, so it may well take five applications to get one grant, and sometimes this process also means changing the details of the grant application and looking to different funding sources – becoming an established researcher is not easy and may require many hours of reading and writing!  Sometimes support comes from the university in the form of PhD studentships, which lend a pair of hands to the process of obtaining enough results to add weight to an initial application.  In addition, some funding sources include grants aimed at new researchers (often young scientists at the beginning of their career).

Measuring Research Ability:

In this modern era, where every work-based activity is monitored for efficiency, science is no different, and grants are only awarded to researchers who have a strong rating in what is known as the Research Excellence Framework (REF – previously known as the Research Assessment Exercise, or RAE).  The award of externally-funded grants is a major part of this exercise, but the other major measure of research excellence is publishing in peer-reviewed journals, and this brings us to our main subject, as this is the first measure of access to science.  Without easy access to published research it is impossible to write a successful research grant proposal.


It should be clear from what I have already said that access to published science is the starting point for writing any grant application.  Strangely, despite the fact that the researcher carries out the research, he does not necessarily have access to even his own published work.  This is a quirk of using publishing houses to print and distribute published science, but it is also a trap created by the REF exercise, where a main requirement is to publish in high-impact journals to improve the REF rating.  However, these high-impact journals are usually owned by the major publishing houses, and the general method of publication means that copyright lies with them.  This problem of access is compounded by the fact that the publishing houses restrict access to published papers unless you subscribe, in one form or another, to the journal!  Recently, there has been a strong movement amongst scientists to change the way science is published, but this is still a problem area.  Some grants include sufficient funds to pay for “open-access” research papers, but many do not.  In fact, a good illustration of this problem is how difficult it is for the general public to access published science – without a library subscription to a number of journals, the cost would be prohibitive.

In summary, even the best scientists do not have immediate access to their own published work; at best they depend on their university library to purchase journals that enable such access, and as such they do NOT own their own research!  If they have used grant funds to publish in an open-access journal then they will be able to read and access that paper, along with anyone else in the world, and therefore they will have “bought” ownership of their research.

Invention, Patents and Ownership:

Of course, publishing research papers is only one aspect of research, but it is the primary means by which research is disseminated and, as a consequence, a very important aspect of science.  However, some research leads to invention, and under the British patent system invention is the primary requirement for obtaining a patent.  A patent allows release of the details of the science – publication to the general public – but uses the “strong arm of the law” to prevent the work being copied, allowing the inventor the right to commercial exploitation.  If the research looks as though it may lead to commercialisation then a patent is a very important means of protecting the work and the ideas.  Often a university will pay the costs of patent applications, but who owns the patent and who owns the research that led to it?  There is no doubt that the researcher’s input is an absolute requirement for the invention, but university employees (in the UK) are subject to a clause in their contract stating that any invention arising from research at the university will belong to the university!  One argument used to support this situation is that without the university’s infrastructure (especially high-cost equipment) the invention might not be possible.  So, a patent may have the inventor’s name on the front, but ownership is the university’s.

So, who really owns science?  Well, it doesn’t really seem that it is the researcher (in many cases they cannot even guarantee that they can read their own papers); although there are often benefits that come the way of the researcher that make the work worthwhile (such as reduced teaching loads), overall science is a hobby at work and the real benefits are simply your name on papers, grants and patents.  This may be best illustrated by DNA sequencing, which is named after the scientist who developed the technique – Sanger sequencing – and, despite his recent sad death, Fred Sanger’s name will live on because of this.  Very few scientists make money from their research, but it can be fun!  However, there is a final aspect of all of this that should be the main thought of any budding scientist – RESEARCH IS OWNED BY SOCIETY – the benefits that come from research are unpredictable and varied, but the technological progress of recent years is one example of the benefits of good science, and the description of the research should be freely available to everyone.

 

Proteins, Peptides and Amyloids – Alzheimer’s Disease

If you have read my recent science blogs you will be aware that I have an interest in Alzheimer’s Disease, based on work involving protein aggregation.  A recent article by Bhattacharjee and Bhattacharyya (Journal of Biological Chemistry, 2013, 288(42), pp. 30559-30570) brought back a result obtained in my lab many years ago and got me thinking about how small peptides can affect protein aggregation.

So, first the unexpected result from many years ago:

At the time we were studying a small peptide called Stp, which was able to switch off a complex restriction and modification enzyme called EcoprrI, but the researcher carrying out this work made an unexpected observation: Stp peptide, when added to a group of proteins of different sizes (used as size markers), altered their apparent size and even aggregated many of the proteins (you can see this aggregation in the wells at the top of the gel).  This peptide was found to be able to inhibit certain protein-protein interactions (later we realised that is how it prevented the restriction enzyme from working), but clearly it could also affect the behaviour of other proteins in a gel.  The effect was primarily aggregation, but the result made me think at the time that maybe amphipathic peptides might also influence, even disrupt, protein-protein interactions.  We had just observed that the EcoR124I restriction enzyme could dissociate as a means of controlling function, and I wondered if Stp would enhance that dissociation – lo and behold, Stp did indeed disrupt the subunit assembly of EcoR124I, and of EcoprrI; we had demonstrated how the anti-restriction activity of this small peptide worked.

And so, secondly, to the recent observations with amyloidosis:

Alzheimer’s Disease is initiated by protein aggregation, in which β-amyloid (Aβ) peptide oligomerises into fibril structures that eventually form plaques within the brain.  Disruption of these aggregates would be a very important treatment for Alzheimer’s and is an area of intensive research.  What Bhattacharjee and Bhattacharyya have shown is that a small peptide, found in Russell’s viper venom, not only destabilises the amyloids but is also stable in blood for up to 24 hours.  This is a very interesting and promising observation that should stimulate the study of the effect of peptides on protein-protein interactions, and perhaps lead to a non-toxic version of the peptide that could be used to treat Alzheimer’s.

It is very interesting how one piece of science can stimulate interest in another, as illustrated above, and this also shows how diverse areas of research can sometimes be linked.  Great ideas are not always the result of hard work, but more often arise from interactions between different researchers – keep collaborating, people!

Update – Nov. 2015:

In the latest issue of Scientific American, under the heading “Advances”, they report an article in Nature from work at UCL in which autopsies of several patients who died from CJD (the human version of “mad cow disease”, which they acquired from infected growth hormone treatment) revealed evidence of the amyloid formation associated with Alzheimer’s – at too early an age for natural onset.  Further work suggests that amyloid precursors, or small clumps of the beta-amyloid, may act in the same manner as prions do in the onset of CJD and lead to Alzheimer’s disease.  It would seem to me that the time is now ripe to begin a serious study of the protein misfolding, aggregation and conformational changes that may trigger these disorders.

Synthetic Biology – will it work?

Every now and then science comes up with a new approach to research that impacts on technology, but often these approaches are controversial, and the headlines we see are far from the truth and can damage investment in the new techniques.  One good example is the genetic modification of plants and the production of GM foods, which has a really bad press in Europe despite many obvious benefits for the world economy and for disease control.  The latest technology, which follows from the explosion in genetic engineering techniques during the 1990s, builds on concepts developed in bionanotechnology and is known as Synthetic Biology.  But what is Synthetic Biology?  Will it work?  And what are the dangers versus benefits of these developments?  Gardner and Hawkins (2013) have written a recent review of this subject, which made me think a blog on the subject was overdue.

My background in this area is two-fold:

  1. I was a part of a European Road-Mapping exercise, TESSY, that produced a description of what Synthetic Biology is and how it should be implemented/funded in Europe.
  2. I was also Project Coordinator for a European research project – BioNano Switch – funded by a scheme to support developments in Synthetic Biology, which aimed to produce a biosensor using an approach embedded in the concepts of Synthetic Biology.

So, what is Synthetic Biology?  I think the definition of this area of research needs to be clearly presented, something that was an important part of the TESSY project, as the term has become associated simply with the production of an artificial cell.  However, that is only one small aspect of the technology and the definition TESSY suggested is much broader:

Synthetic Biology aims to engineer and study biological systems that do not exist as such in nature, and use this approach for:

  • achieving better understanding of life processes,
  • generating and assembling functional modular components,
  • developing novel applications or processes.

This is quite a wide definition and is best illustrated with a simple comparison – in electronic engineering there exists a blueprint (circuit diagram) that shows how components (resistors, capacitors etc.) can be fitted together in a guaranteed order to produce a guaranteed result (a device such as an amplifier).  The Synthetic Biology concept would be to have a collection of such components (DNA parts that include promoters, terminators, genes and control elements; cellular systems, including artificial cells and genetically engineered bacteria capable of controlled gene expression; and interfaces that can connect biological systems to the human interface for useful output).  This would mimic the electronic situation and provide a rapid mechanism for assembly of biological parts into useful devices in a reliable and predictable manner.  There are many examples of such concepts, but the best known is the BioBricks Foundation.  However, at the TESSY meeting I was keen to make it clear that there are fundamental problems with this concept, so what are the problems?

At its simplest, a BioBricks database would consist of a number of different types of DNA (promoters, short DNA sequences that switch a gene on; terminators, short DNA sequences that switch a gene off; control elements, DNA sequences that control the promoter switching a gene on or off as required; genes, DNA sequences that produce biotechnologically useful products; and cells, the final package that enables the DNA to do its work and produce the required product), which sounds logical and quite simple.  However, biological systems are not as reliable as electronic systems, and combinations of promoters and genes do not always work.  One of the major problems with protein production using such artificial recombinant systems is protein aggregation, resulting in insoluble proteins that are non-functional.  In addition, there are many examples (usually unpublished) of combinations of BioBricks that do not work as expected, or that result in protein aggregation if used in a different order – none of which ever happens with electronic components.  The reasons are far from clear, but are closely related to the complexity of proteins and the need for them to operate in an aqueous environment.  My thought on how to deal with this situation is to have a large amount of metadata associated with any database of BioBricks, including information about failures or problems of protein production from specific combinations – a sketch of what I mean follows below.  However, I am not aware of any such approach!
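To illustrate what such metadata might look like, here is a minimal sketch (my own toy design, not the BioBricks standard – all names, sequences and failure records are hypothetical) of a parts registry in which known bad combinations are recorded, so that an assembly routine can warn about them:

    # Toy parts registry: DNA parts plus metadata about known failures,
    # so assembly can warn when a flagged combination is requested.
    # All names, sequences and failure records are hypothetical.
    from dataclasses import dataclass
    from typing import List, Set, Tuple

    @dataclass
    class Part:
        name: str
        kind: str                      # "promoter", "gene", "terminator", ...
        sequence: str

    # metadata: pairs of parts reported not to work together
    known_failures: Set[Tuple[str, str]] = {("P1", "geneX")}

    def assemble(parts: List[Part]) -> str:
        for a in parts:
            for b in parts:
                if (a.name, b.name) in known_failures:
                    print(f"warning: {a.name} + {b.name} reported to aggregate")
        return "".join(p.sequence for p in parts)  # naive concatenation

    construct = assemble([
        Part("P1", "promoter", "TTGACA"),
        Part("geneX", "gene", "ATGAGT"),
        Part("T1", "terminator", "TTTTTT"),
    ])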

There are other aspects of Synthetic Biology that do not depend on BioBricks, and one example is the artificial cell.  The ideal for such a system is a self-assembling package, capable of entrapping DNA, capable of replication and survival, and able to produce useful biomaterials – and significant steps have been made toward such a system.  However, one area of concern as such systems are developed is containment – can we really be sure these artificial microbes will remain in a contained environment and not escape to interact with, and possibly change, the natural bacterial population?  However, the power and capability of such a system should not be underestimated, and the likely use in future medicine could be immense – simple examples would be delivery systems for biomaterials that can activate cellular changes by targeting the required cell and then switching on protein production (e.g. hormones).  This type of targeted medicine would be a major breakthrough in the later part of this century.

Another type of Synthetic Biology involves the artificial assembly (possibly self-assembly) of biomaterials onto an artificial surface in a way that is unlikely to occur naturally, but provides a useful device – I see this as more like what a BioBricks project should be – such a system is usually modular in nature, and the biomaterial would normally be produced using recombinant techniques.  The research project I mentioned earlier involved such a device, and the outcome was a single-molecule biosensor for detecting drug-target interactions at the limits of sensitivity.  The major issue we had in developing this device was the precise and accurate attachment of biomaterials to a surface in such a way that they function normally.  However, overall the project was successful and shows that a Synthetic Biology approach has merits.

What are the benefits that Synthetic Biology can provide society?  Well, one advantage is a more systematic approach to biotechnology, which to date has tended to move forward at the whim of researchers in academia or industry.  Assuming the problems associated with protein production, mentioned above, can be better understood, there could be a major boost in the use of proteins for biotechnology.  In addition, Synthetic Biology techniques offer a unique opportunity for miniaturisation and mass production of biosensors that could massively improve medical diagnosis.  Finally, artificial cells have many future applications in medicine, if they can be produced in a reliable way and made to work as expected:

  1. They could provide insulin for diabetics.
  2. Be made to generate stem cells, which could be used in diseases such as Alzheimer’s and Huntington’s.
  3. They could deliver specific proteins, drugs and hormones to target locations.
  4. They could treat diseases that result from faulty enzyme production (e.g. Phenylketonuria).
  5. They could even be used to remove cholesterol from the blood stream.

However, there are always drawbacks and risks associated with any new scientific advance:

  1. Containment of any artificial organism is the most obvious, but this is enhanced by the possibility of using the organism to produce toxins that would allow its use as a biological weapon.
  2. The ability to follow a simple “circuit diagram” for protein production, combined with a readily available database of biological material, could enable a terrorist to design a lethal and unpredictable weapon much more complex and perhaps targeted than anything known to date.
  3. Research could be inhibited if a readily available collection of materials prevents patent protection of inventions.  This could be complicated by the infringement of patents by foreign powers in a way that blocks conventional research investment.
  4. Problems associated with the introduction of novel nano-sized materials into the human body, including artificial cells, which may be toxic in the long term.

My own feeling is that we must provide rigorous containment and controls (many of which already exist), but allow Synthetic Biology to develop.  Perhaps there should be a review of the situation by the end of this decade, but I hope that the risks do not materialise and that society can benefit from this work.

Shale gas development

This is not my subject, scientifically speaking, but having read a recent paper on surface water contamination (Olmstead et al., Proceedings of the National Academy of Sciences, 2013, 110(13), pp. 4962-4967) and having noted the government’s recent tax break for shale gas development, I thought I should at least make a few comments.

Shale gas occurs as methane (natural) gas bubbles within pores in deposits of shale (produced by pressure and heat acting on deposits of organic material, but at pressures and temperatures too low to produce oil or coal) and is extracted by a process known as Fracking.  As illustrated, this process involves a deep drill into and along a shale deposit, followed by the introduction of a water/chemical fluid that fractures the shale and releases the trapped gas.  In the USA the chemicals used are proprietary and their composition is secret, which lends an air of mystery to the dangers involved in this process.

The linked paper presents results that raise concerns about surface water contamination associated with both spillage (accidents) and the drilling process, but more importantly with off-site waste treatment and above-ground land management.  This aspect of the process would concern me in the light of recent flooding in this country and our overall inability to manage surface water in a controlled manner.  Unless the government begins to manage water, flooding and drainage better, I foresee a new problem developing around any onshore shale drilling projects.  It is always important to weigh the benefits to society of a new energy source against the environmental impact of its development, but too often money and profit are the driving factors!

There needs to be a more open and informed approach to this type of energy development, where the public can make a sensible judgement of the impact and have some say in the rate of development.  The government has indicated that Fracking in this country will be less secretive and more regulated, but that does not address many concerns.  There is no doubt that Fracking will be a major industry in Britain very soon, and the political driver of securing gas reserves that are not subject to external political whims will be a major factor.  Opposition to the process must be driven by concerns that are backed by good science, which is a major problem as there are insufficient scientific papers examining the health concerns associated with the Fracking process; organisations such as Greenpeace may well adopt a political agenda that will not work against the economic drivers for using this gas (a similar approach has delayed, but cannot prevent, the use of nuclear power, and Greenpeace need to “see the bigger picture” of the need for energy that cannot be met by energy saving).

In conclusion, I think Fracking is a messy industry that will be coming to your area soon – shale deposits in the UK are widespread and numerous.  It may lead to cheaper gas (although I doubt that) and it should make us, as a country, less dependent on external fuel supplies, but I am concerned that there will be spills and accidents that may have a long-term environmental impact of which we are as yet unaware!  One last interesting observation is that burning the existing (known) fossil fuel reserves would already take us past existing limits on carbon usage!  So, the use of shale gas can only worsen the problems associated with the greenhouse gas carbon dioxide.