
Aiming for Adaption – an essay on germline gene-editing

Every year the Oxford Uehiro Centre for Practical Ethics holds an essay competition for students. I submitted the essay below, which didn’t get selected, but having spent some time on it, I am sharing it here in the hopes that somebody will find it interesting.

Aiming for Adaption


In an increasingly interconnected and complex world, how should we make decisions?

I will explore this question in the context of the debate surrounding germ-line engineering of the human genome, as the deliberate enhancement of humans is becoming increasingly feasible. When re-imagining the human species, what values should guide our actions and laws?

Given our increasing ability to change lives, societies and our planet in extreme and lasting ways, the choices we make today will have a greater impact on future lives than those made by previous generations. It is critical that we get them right. I will argue that we need to incorporate more diverse solutions into our scope of thinking about genome engineering. Unfortunately, in both the medical sciences and elsewhere, there is an increasing tendency to apply methods that lead to the illusion that only one solution exists to any given problem. When designing human genomes, this approach might bring short-term gains, but I will make the case that when facing a complex problem, such as inferring which traits will give coming generations the highest welfare, we need to aim for adaption, not perfection.

Invitation to GP-write (Genome Project-Write), a meeting for scientists in May 2017 on how to “print”, or write, a whole human genome.


While we might have stumbled into the Anthropocene by chance, we are now approaching a time where we can apply our power in a more deliberate manner (1). Editing the human genome is in many ways the ultimate test for the Anthropocene man and woman, and due to technical advances, it is one that we will likely face very soon. Gene editing techniques will permeate and become standard in reproductive medicine, for example, when correcting debilitating genetic diseases in embryos used for in vitro fertilization. The relevant question concerning our children’s genetic design will shift from “should we do it?” to “how do we do it right?”. It will be hard to argue against these treatments. The physician will have prevented a disease before it arises, the parents will have given their child a better chance in life, and a healthy child will now be allowed to develop (2). Yet if we accept changes to some genes, why not change them all? The question of how to do it, and to what degree, is what our generation needs to contemplate. In this essay, for the sake of argument, I will assume that it is possible to engineer the human germ-line and that this can be done in a safe manner with little inconvenience for the parents (3). Fully capable then, how should we proceed?

It is tempting to relegate the question to one best made by the parents and leave it at that. Parents roughly know which environment their child will grow up in, and might reasonably be able to pick traits that will be important for a good life. This seems insufficient though; traits selected for, while beneficial for the child, might be in conflict with other ethical considerations valued in society (Savulescu and Kahane, 2009). Even if some axiom could be formulated which rational parents would follow, it underestimates the scope of our power. We are no longer in an era where mistakes affect us only in one time period or in one locality. If the Anthropocene parents unwittingly make a bad decision, it will not only affect the child or family; that decision will propagate through the genes of generations. Decisions become intertwined and far-reaching, and it therefore seems fair to suggest that the surrounding community ought to have some say in them.

Unfortunately, even what constitutes a “bad” decision might not be immediately clear. Historically, when we have modified other animals or plants, we have bred towards idealized versions of these species, aiming for the greatest utility (say, crop yield) or elegance (say, the beauty of an animal). Medical decision-making is no different. Once an approach has been shown to work, it is considered unethical not to apply the same treatment to everyone. Indeed, once gene signatures have been identified and successfully incorporated into a child’s genome, such as for increased intelligence or happiness, it is unlikely that a parent, doctor or local authority would allow further changes to that signature. This approach naturally leads to a more uniform genetic makeup, which would improve the individual but could leave our species as a whole susceptible to sudden shifts in our environment, the same way our homogeneous food supply is now vulnerable to climate change (4). This generates an obvious dilemma: how do we conserve the biological resilience we have paid for over millennia, while improving the health and welfare of our children?

There is another complication, one that is far more difficult to approach. The human genome is not deterministic. While we might be built from a script, we are, in the words of Siddhartha Mukherjee, “built to go off-script” (Mukherjee, 2016). The human genome is not merely complicated, as an airplane engine is, where if we understood each component we could anticipate its function. It is also complex, in the sense that the mixing of genes leads to emergent properties that did not exist before. Changes to existing genes, or the introduction of completely novel functions, would not be isolated from these interactions; indeed, they would be an active part of them. Predicting large-scale changes might be possible for the immediate generations, but becomes impossible over longer time scales, with the slight, but real, possibility of catastrophic failure in welfare or genetic fitness.

Starlings swarming before migrating south: even though we know how the individual components in a system work, it can be impossible to predict how they will behave in groups; emergent behaviors might arise that are never seen when looking at just one bird – or gene – alone.

Aiming for Adaption

These considerations profoundly affect how we should approach engineering the human genome, or other complex systems. They hint that neither universal maxims nor predictions of future welfare can be expected to fully address dilemmas arising from our newfound abilities. Instead I propose a soft principle on which we can stand: rather than aiming for perfection, we must aim for adaption.

This entails not setting rigid objectives, but fluid ones. When confronted with an unpredictable system, the best approach is not one of uniformity, but one of variation. Instead of aiming for one idealized image of man, freed from all genetic faults, we must imagine something much more muddied: versions of ourselves that are variable, perhaps even more so than we are naturally today, in the hopes that some of these will flourish. We must accept change, errors and blunders to a higher degree, and if we do want to edit our genes, blind ourselves to a certain extent. We should not pursue the “best” gene signature, but “incomplete” ones that include some diversity. The experiment of the Anthropocene should not be to excel in some specific traits, a Sisyphean task that will fail, but instead to identify broad environments within which our genes can evolve, and to identify many of them.

What could these environments be? Others have suggested that when enhancing humans we should perfect our intelligence, creativity or our morals (Savulescu et al., 2011). These traits could well be counted among the core constituents of what makes us human. If these were the domains we engineered, we should be ready to accept that they will change considerably, possibly in detrimental ways. Perhaps a better environment would simply be the general health of humans, with the understanding that this is the best soil in which those core constituents could thrive, leaving it to the individual to excel beyond that.

This seems a miserable conclusion: as we approach the peak of our omnipotence, we are tied down by our knowledge that either we will fail in our goal of perfection or, if we accept the aim of adaptability, that our work will often be in vain. Perhaps there is some solace in the fact that there might be normative claims to support this aim. A number of people have defended both diversity (Aurenque, 2015) and disabilities (Garland-Thomson, 2012) as qualities that are intrinsically good. Or perhaps we will come to understand our blinding as a form of a self-imposed veil of ignorance, ensuring a just society (5). Personally, I find it valuable to live in a world that is quirky and strange. This is not because it is necessarily a better place, but because it does provide the variation from which novel possibilities arise, something that a perfect world would not.


It is easy to attack what I am suggesting here: for one, many of the problems I have highlighted could be solved simply by ensuring that every human in every generation were genetically designed. Alternatively, we could reintroduce imperfect genomes if things start to go wrong. Another obvious objection is that we would be sacrificing our children’s excellence for inferiority. Finally, it has been suggested that we can get around the complexity of nature by applying certain heuristics to our engineering (Sandberg and Bostrom, 2009).

All these are important objections. I would be a very poor proponent of my own aim if I did not at least give them room to be considered or tried out. It is worth noting, though, that most of these objections would not be this generous in return. This seems to me a general trend: whether solutions are proposed by algorithms, doctors or politicians, they are increasingly monocultural, to the detriment of diversity and hence of emergent ways of living. This is a great shame, not least because adaptable solutions are showing their force in many other domains; counterintuitively, error-prone and messy solutions outperform optimal ones when approaching problems in economics (Arthur, 1992), conflict zones (Sagarin et al., 2010), climate change (Verweij et al., 2006) and even general decision-making (Johnson et al., 2013), most likely because all these domains can be considered complex systems as well. It seems that the possibilities inherent to variation somehow bring out the best in humans, a good we should be careful not to throw away. As we begin to engineer humans, it will be important that we retain this gift.


(1) Here I will use the word Anthropocene to define the era in which human activities significantly impact biological systems.

(2) That is not to say that you could not argue against this. I merely do not think these arguments will be successful in practical political terms.

(3) A number of techniques are good candidates for allowing this level of control. Here I will suggest just one: it is technically feasible to precisely synthesize (i.e. write) DNA in large amounts and paste the pieces together. It was recently proposed to use these methods to print a human genome (Boeke et al., 2016). The genomes of both parents can easily be sequenced (i.e. read), and thus a genome could be inferred from both parents’ genomes, in which one could correct mutations or insert novel functions before it is synthesized and inserted into an egg.

(4) It’s worth noting that any trait that gives even a slight survival advantage will quickly permeate through the genetic makeup of a population, whether or not we actively introduce it in an individual.

(5) Luciano Floridi has suggested that John Rawls’ ’veil of ignorance’ is actually a veil of uncertainty (Floridi, 2015).


Arthur, W. B. (1992). On Learning and Adaptation in the Economy. Santa Fe Institute, 1–31.

Aurenque, D. (2015). Genetic diversity as a value: imposing fairness. Am J Bioeth 15, 18–20.

Boeke, J. D., et al. (2016). The Genome Project-Write. Science.

Floridi, L. (2015). The Politics of Uncertainty. Philos Technol.

Garland-Thomson, R. (2012). The case for conserving disability. J Bioeth Inq.

Johnson, D. D. P., et al (2013). The evolution of error: error management, cognitive constraints, and adaptive decision-making biases. Trends in Ecology & Evolution.

Mukherjee, S. (2016). The Gene. Penguin Random House.

Sagarin, R. D., et al. (2010). Decentralize, adapt and cooperate. Nature.

Sandberg, A., and Bostrom, N. (2009). The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement.

Savulescu, J., and Kahane, G. (2009). The Moral Obligation To Create Children With The Best Chance Of The Best Life. Bioethics.

Savulescu, J., et al. (2011). Enhancing Human Capacities. Wiley-Blackwell.

Verweij, M., et al. (2006). Clumsy solutions for a complex world: the case of climate change. Public Administration.

Forest plot showing survival ratios in ggplot2 by emulating FiveThirtyEight’s theme

I am interested in what a certain gene’s overexpression means in terms of cancer patient survival. A standard way of visualizing this sort of data is using Kaplan-Meier curves, as shown below. But I really wanted an overview of many cancers, not just one, to be able to quickly see what effects the expression has across the board. You could insert multiple Kaplan-Meier curves next to each other – some papers do that – but I was interested in a more visually pleasing and informative approach.

Kaplan-Meier curves, where patients have been stratified based on their expression of the gene TP53. It’s hard to visualize multiple cancer-types this way. Data and images from PROGgeneV2 (1).

I thought I would use a so-called forest plot instead, visualizing the hazard ratio – i.e. the increased risk of death associated with overexpression of the gene. These plots are normally a bit boring, but Nate Silver’s team at FiveThirtyEight has created some beautiful visuals that I thought I would try to emulate in R. Now this isn’t great, but it’s a beginning. Here’s the R code:
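(In case the embedded code doesn’t render, here is a minimal sketch of the idea. The hazard ratios below are made up for illustration – they are not the real PROGgeneV2 numbers – and ggthemes’ theme_fivethirtyeight() would get you closer to the actual FiveThirtyEight look than the plain theme used here.)

```r
library(ggplot2)

# Made-up hazard ratios with 95% CIs -- for illustration only
df <- data.frame(
  cancer = c("Breast", "Lung", "Colon", "Ovarian"),
  hr     = c(1.4, 0.9, 1.1, 2.0),
  lower  = c(1.1, 0.7, 0.8, 1.3),
  upper  = c(1.8, 1.2, 1.5, 3.1)
)

p <- ggplot(df, aes(x = hr, y = reorder(cancer, hr))) +
  geom_vline(xintercept = 1, linetype = "dashed", colour = "grey50") +
  geom_errorbarh(aes(xmin = lower, xmax = upper), height = 0.2) +
  geom_point(size = 3, colour = "#008FD5") +
  scale_x_log10() +  # hazard ratios are naturally shown on a log scale
  labs(x = "Hazard ratio (log scale)", y = NULL,
       title = "Effect of overexpression on survival, per cancer") +
  theme_minimal(base_size = 14)
p
```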

And here’s the result:

A forest plot created in R with ggplot2, attempting to emulate FiveThirtyEight’s theme.

I like this visualization better, as you can easily see and interpret the effects across a number of cancers. I think I’ll try and add in the P-value and numbers as well (later). The code is based on Stephen Turner’s code, with some tweaking. Survival data is easily available through the PROGgeneV2 website.


1. Chirayu Pankaj Goswami and Harikrishna Nakshatri. ”PROGgeneV2: enhancements on the existing database”. BMC Cancer, 2014.

2. FiveThirtyEight website. “Who will win the presidency?”. Link:

If a PhD is like a marathon…

I sometimes hear that doing a PhD is like running a marathon. If that’s so, what tricks from running apply to your PhD? Here’s my attempt at stretching the analogy as thin as possible:

How not to end your PhD: Robert Cheruiyot slipped across the finishing line in the 2006 Chicago Marathon and had to be treated for internal bleeding.

It’s your run
The most important bit first: this is your run! Many people fret about what their supervisors are (really) thinking or what their motives are: you shouldn’t care. You are the runner. It’s your ass on the line, and at the end of the day you will gain the glory. Think you are heading in the wrong direction? You are the runner. Think you need to go up the hill instead of taking the easy, but dodgy, scientific route? You are the runner. That means you set the pace – i.e. you have to learn when to say yes and no – and that it’s your responsibility to defend the direction you are going in.

To do a marathon you have to run
Doing a PhD will make you sweat. It’s hard work because you have to juggle your time, complex collaborations and tricky work, all on a route that’s not clear – heck, often it’s completely shrouded in clouds (1). It will be frustrating. Simply accepting this can make your life easier: yes, it’s hard, but there’s a finishing line out there somewhere.

Learning how to run
That doesn’t mean it’s impossible – even better, it’s something that can be trained for. Runners train all the time: they set up a schedule and try to stick to it. Importantly, they don’t start by running the full marathon, but slowly increment their distance. You should do the same. Try to increase both your speed and distance as you progress, but not at the same time. Remember to decrease your distance every few weeks to allow your muscles – and brain – to regenerate.

What do I mean? First of all, a PhD is mainly about scheduling (2). Think about and identify areas in both your research and personal development you want to improve, but remember you can’t do everything. Make a list, a diagram, anything; prioritize and see how far that gets you. Outsource stuff. Say no to things. Setting up short-term goals – for example at the start of the week – will make your work more gratifying, and it means you don’t have to fuss about what to do all the time. You won’t always achieve the time or distance you set out for. That’s life. But if you hit a wall, it’s time to utilize your supervisor and colleagues and make a new schedule. Philip Guo has perfectly illustrated when you need to “course correct” your research in this video.

Learn when you have to work fast and when you just have to work a lot. Try to be faster than you were before: many great discoveries were made when somebody cut all the corners they could. Of course, also, many were not. Learn how to work long-term – what’s required to setup complex experiments? But be careful: if you try to be both fast and go long hours, you will fail, or worse, get injured (3).

Learn from others: talk to everybody and spend time reading (perhaps a paper a day?). Learn how to think creatively about your work. Finally, copy the good runners and their techniques. Almost everybody is willing to share their own running stories.

Enjoy yourself – and have a good time
We need to change the culture that most research happens in (4). Marathons – and science – are filled with people who are focused, fast and only care about themselves. Don’t be that person, because at the end of the day, you won’t enjoy your run. Instead, try to run in groups or teams at least once in a while: it’s more fun when you work together, and I am sure there’s a plethora of evidence showing that you achieve more this way. You also need somebody to chat, complain and just generally shrug about everything with. They will carry you through the race when it gets hard. There’s a fantastic quote somewhere on Twitter: “Science is filled with brilliant people. Stand out by being nice.” I absolutely believe that.

These guys are having a fun PhD.

You should also celebrate every single thing you can think of! For all the hard work we do, science is filled with rejection, so we need stuff to cheer us up. If somebody from your lab gets a prize, gets selected to give a talk or finally gets that funky experiment working, whip out the champagne and party! Give them a clap, a hug, and celebrate. If you are not drinking some sort of alcoholic beverage once a week, you are doing it wrong.

Go all in on equipment
Why not make your life easier? I bought a couple of programs to semi-automate the planning and reading I needed to do. 30 dollars might seem like a lot, but if it saves you a couple of hours every week, it’s well worth it. I also set up a ton of keywords to filter out all the useless information that will flood your inbox – probably the best idea I ever had. Fight for the basic equipment you need to make your day easier: nobody would start a marathon in 3-year-old shoes, and you shouldn’t start your PhD with a 3-year-old computer or a broken chair.

Remember to stretch
If you do not stretch, you will get injured. I remember reading this when I first started running, and shrugged it off. After a weird array of injuries, I now religiously stretch before and after a run – and before I do any science.

Personally, that means booking time off every 3 months for a prolonged weekend or week, a trip home, or a visit to some new spot of the world. I also insist on one day a week where I force myself not to do anything science-related at all – no email, no articles, no nothing. If I am going to spend the next 10 hours at work, I might as well start the day off in a relaxed manner: for me that means spending 20–30 minutes in the morning with a good cup of mocha and a book or newspaper. It’s the best! I also started running – ironically – as I found it de-stressing.

The end
You made it to the end – thanks for reading. I started thinking about these things about a year ago, when I was utterly depressed about how my research was going and considered quitting. Something needed to change. I started reading different blog posts about the PhD life and they helped a lot: “this can actually be taught!” I realized. Most of the ideas here are inspired by those others. I am hoping that perhaps somebody will find these quirky notes useful as well. Good luck on your run.

From “Message to a Graduate” by Grant Snider at

(1) See Uri Alon’s brilliant video on “Being in the cloud”.
(2) Sherri Rose has written an extremely good piece about being an efficient PhD researcher. Highly recommended.
(3) Jennifer Walker has written powerfully about depression in academia.
(4) Labmosphere is a great initiative “dedicated to life satisfaction in the area of academic sciences”.

An R script for scraping news from Nature journals to Capti Narrator

Recently I found a very neat program called Capti Narrator, which does a pretty good job of reading texts aloud. It works on an iPhone and it’s free. Well this is great I thought – perhaps finally I would be able to catch up on some of all those articles I want to read?

Well it turns out Capti isn’t great for full-blown journal articles. But: I do find it excellent for all the extra stuff journals are filled with, like the perspectives and news content – with a little tweaking. Here I’ll share the script (code here) I made for improving the listening experience and for automatically scraping all the “news and views” articles from a Nature journal. The end result sounds like this:

I had a number of issues with Capti “out of the box” for scientific texts:

  • It tends to read the text in one big blob – as a listener, it’s hard to figure out where the paragraphs are, or whether something is actually a title or a figure legend.
  • As Capti is reading, it will read all references out loud, as well as URLs and other stuff you basically don’t care about.
  • For these reasons, for a good “listen”, you often need to copy-paste the text into a text file, check that all paragraphs are neatly lined up – and then combine articles by hand.

Now, I thought all these things were a little annoying, so I made an R script which basically does the following:

  1. It scrapes all the “news and views” and “research highlight” articles from the current issue of a Nature journal – in my case Nature Immunology – and combines them in one text file.
  2. It puts a distinguishing text in front of the title, author, abstract, the date and each paragraph. This way, when a text is read aloud, the program will read “Title: ‘Central tolerance: what you see is what you don’t get!’”, put a “New paragraph” before each new subsection, and so on. It makes it a little easier to listen to the text and follow along.
  3. It scrapes all the paragraphs from the articles and removes a number of things: all links, all the weird line breaks and all the references. It also removes the figure texts.
  4. Finally it saves it all in a text file in a folder of your choice: since Capti can sync with Dropbox, I just save it here and voila – an hours worth of listening is ready.
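As a minimal sketch of the labeling-and-cleaning steps: the HTML string below is a stand-in for the real page, and the actual script uses Nature-specific CSS selectors rather than plain h1/p tags.

```r
library(rvest)
library(stringr)

# Stand-in HTML; the real script points read_html() at the journal's
# current-issue URL and uses Nature-specific selectors
html <- '<html><body><h1>Central tolerance</h1>
  <p>First paragraph [1] with a reference.</p>
  <p>Second paragraph, see https://example.com for details.</p></body></html>'
page <- read_html(html)

title <- html_text(html_node(page, "h1"), trim = TRUE)
paras <- html_text(html_nodes(page, "p"), trim = TRUE)

# Strip reference markers like [1] and leftover URLs
paras <- str_squish(str_replace_all(paras, "\\[\\d+\\]|https?://\\S+", ""))

# Prefix each unit so the text-to-speech voice signals the structure
out <- c(paste("Title:", title), paste("New paragraph.", paras))
writeLines(out, "capti_issue.txt")  # save somewhere Capti's Dropbox sync can see
```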

The voice you listened to above is called “Joey” (which cost 4 dollars) – for the number of weird scientific words in your standard journal article, I think it does a pretty good job.

I’ll write a detailed post about the code later, but here are a few notes:

  1. There are plenty of things that could be done to further automate this: like going through a list of all your favorite journals, only picking up things you haven’t listened to yet, etc. – but for now, it’ll just give you one journal at a time.
  2. This only works with Nature journals.
  3. I used the rvest package for most of the scraping, but reverted to regular expressions for the text itself – I know, I know! – you aren’t supposed to do that. But it turned out to be a lot easier to remove all the references and links this way, since they are in distinct HTML tags.
  4. For the text manipulation I used another Hadley Wickham package, stringr – and the awesome free book from Gaston Sanchez’s website called “Handling and Processing Strings in R”.

The code can be found here.

Generating an article network using rentrez and igraph in R

One thing I often find myself trying to do is read up on the relevant literature within a new field. It’s often difficult to identify seminal papers as a newcomer, so I’ve wondered if there would be a way to quickly identify key papers.

Over Christmas I read two really good articles on creating networks in R: David Robinson’s “Love Actually” network and Katherine Ognyanova’s excellent guide to networks in R. I decided to give network analysis a shot myself, first harvesting articles from PubMed and then visualizing their connections using the R packages rentrez and igraph. The idea was to identify highly cited papers as a way to guide my reading.

The code I ended up with can be found here.

Basically, what I did was the following:

  1. Download or scrape information on all other articles citing one “seed” article from PubMed.
  2. Loop through all of these articles and find all articles that cite them.
  3. Iterate a couple of times (or levels, as I call it in the code).
  4. Visualize and export the data.
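The core loop can be sketched with rentrez and igraph roughly like this – the seed PMID and the two levels are placeholders, and entrez_link’s pubmed_pubmed_citedin link only returns the PMC-derived citations, so treat this as a sketch rather than the full script:

```r
library(rentrez)
library(igraph)

seed <- "12345678"   # placeholder PMID -- swap in your own seed article
edges <- data.frame(from = character(), to = character())
frontier <- seed

for (level in 1:2) {   # two "levels" for the sketch
  next_frontier <- character()
  for (pmid in frontier) {
    links  <- entrez_link(dbfrom = "pubmed", id = pmid, db = "pubmed")
    citing <- links$links$pubmed_pubmed_citedin  # articles citing this one
    if (!is.null(citing)) {
      edges <- rbind(edges, data.frame(from = citing, to = pmid))
      next_frontier <- c(next_frontier, citing)
    }
  }
  frontier <- setdiff(unique(next_frontier), c(edges$to, seed))
}

g <- graph_from_data_frame(edges)
V(g)$cited <- degree(g, mode = "in")  # in-degree = times cited within the network
```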

All articles citing a seed article are found (iteration 1). All articles citing them are then found (iteration 2), and so on. Over time, highly connected articles in the network will emerge – here highlighted in red.

I used the excellent rOpenSci package rentrez, which utilizes E-utilities from NCBI, to generate a node and link list for use in igraph afterwards. I loop through the nodes and edges in my script, but there are probably smarter ways to do this – it’s definitely slow to loop through the PubMed IDs like this – but it worked.

In the end the result I got was this, using a Nature Immunology article from 2002 with 4 iterations:

Sample network based on 4 iterations from “seed” article. All articles with over 30 citations are marked in red.

In total, the script found 4162 unique articles with some 5387 edges between them. The average number of citations of each paper was 9, but the distribution is obviously heavily skewed. In the graph I’ve highlighted the nodes with 30 or more citations. Node sizes also depend on the number of citations – and, as outlined in the code above, you can easily create a reading list from these. Another approach was to identify the top 10 last authors within the network, which is also included in the code.

A couple of notes:

  • The for-loop for scraping PubMed is somewhat slow – there’s probably a better way to do this.
  • PubMed summary information only shows references from PubMed Central articles, not all citing literature. This means the network won’t be complete, but I hope the highly connected nodes are representative.
  • The cutoff of 30 citations used above was somewhat arbitrary – I basically set it to get a manageable number of articles to read.
  • I struggled the most with the layout in igraph. For the figure above I used the Fruchterman-Reingold layout algorithm, but it’s worth playing around with both the layout and the edge and vertex settings. Katherine Ognyanova’s guide was great in explaining how to set up the graph – be sure to read it!
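The layout step looks roughly like this – the preferential-attachment graph below is just a toy stand-in for the article network, and the cutoff of 30 mirrors the one used in the figure:

```r
library(igraph)

set.seed(1)
g <- sample_pa(200, directed = TRUE)   # toy stand-in for the article network
cites <- degree(g, mode = "in")        # in-degree = citations within the network

lay <- layout_with_fr(g)               # Fruchterman-Reingold layout

plot(g,
     layout          = lay,
     vertex.size     = 2 + sqrt(cites),
     vertex.color    = ifelse(cites >= 30, "red", "grey80"),
     vertex.label    = NA,
     edge.arrow.size = 0.2)
```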

It’s a lot of fun to play around with igraph, which has a lot more functions than I am using here. For example, it would be cool to identify clusters within the network, like Mark Thorton beautifully did using book styles, or to use another metric than just incoming connections to identify key nodes (i.e. key articles).

Tools and tips for reproducible science from Rich FitzJohn

Rich FitzJohn, a computational biologist and director of puppets, made a very nice slideshow of how to produce reproducible science using a few tools revolving around R. It’s a great read!

You can download the slideshow here, but the gist is in this slide I think (which I have stolen from the slideshow, hope it is okay):

Tools for creating reproducible research with R

He uses an example from one of his own studies on wood, which is very instructive – the markdown/knitr-generated analysis page really shows how transparent and reproducible science can be.


1. Slideshow “Reproducible research – current challenges and future prospects” by Rich FitzJohn

Scientific calculator on the desktop

Emulation of the good ol' TI-83 on a mac

I love my old TI-83 calculator, but it is getting increasingly… well, weird. So I was pretty excited to learn that it is possible to set up an emulator of my good old companion on the desktop instead, using software called Wabbitemu.

Here’s what you do:
1. Download Wabbitemu for mac
2. Download a ROM image from your calculator. There is a good guide here on how to do it. Unfortunately, you need a PC. Alternatively, you can try to google one – but I am not certain those are legal.
3. Open the ROM image from Wabbitemu.

As it turns out, there are a number of other emulators for other platforms listed here.

Near-haploid and haploid cell lines

Haploid vs diploid

Haploid mammalian cell lines are useful for forward-genetics experiments, the idea being that new phenotypes (i.e. those induced by mutations or knock-outs) are exposed immediately.

I haven’t been able to find good lists of such cell lines, so instead I have compiled a non-exhaustive list here. Very few are available, even though haploid cancers are known to arise in humans and some authors mention that many cell lines become hypodiploid or even near-haploid as they are passaged. If anybody knows of more, I would be happy to know.

Name | Cancer | Non-haploid chromosomes | Reference
KBM-7 | Chronic myeloid leukemia | 8, 15*, Y* | Kotecki et al., 1999
HAP1 | Derived from KBM-7, but fibroblast-like | – | Carette et al., 2011
NALM-16 | Acute lymphoblastic leukemia | ? | Kohno et al., 1980
MMLAL | Myeloma | 1–12, 14–16, 18–22, X | Wong et al., 2013

* Varying descriptions as to whether these chromosomes are diploid or not.

1. Haploid vs diploid image from Wikimedia user Ehamberg.
2. Haploid genomes illustrate epigenetic constraints and gene dosage effects in mammals, Epigenetics & Chromatin 2013, Leeb and Wutz.

A quote on death

Torch Race with Prize Hydria; Three Youths, c. 430 BC-420 BC.

A quote from Michel de Montaigne:

“Go out of this world as you entered it. The same passage that you made from death to life, without feeling or fright, make it again from life to death. Your death is part of the order of the universe; it is part of the life of the world.

Our lives we borrow from each other… and men, like runners, pass along the torch of life.”

Quote from “The Swerve” by Stephen Greenblatt.
Image of the vase from the Harvard Art Museum.