COVID-19 and the categorical imperative

I recently asked on Twitter:

And got reassuring results:

Then I asked another question. Have a think about how you might answer it.

Here is how Twitter responded:

Quite a difference! Not surprising perhaps. The risk of passing on COVID-19 is lower by a factor of 12,000. Nevertheless, I think that the good people of Twitter are wrong, and will explain why a little further down.

The people of Twitter are not alone though, and these sorts of decisions aren’t just thought experiments. The UK government first required people with coughs or fever to isolate themselves on 12 March, long after the virus arrived in the UK. This was part of the government’s strategy of introducing “the right measures at the right time”, and Sir Patrick Vallance gave this as an example, saying that prior to 12 March a very small proportion of people with symptoms would have COVID-19, so it would not make sense to isolate them.

I’ve seen a similar argument made against mask use. After lockdown a small proportion of people will have COVID, the argument goes, so the chance of any particular mask stopping an infection is tiny:

We do this sort of risk analysis all the time. When you go somewhere in a car there is some small chance you could cause an accident in which someone is hurt, but if that risk were 12,000 times higher you would never drive again.

Even some experts appear to agree:

[Image: clipping from The Times, 21st April]

But this argument does not apply for a contagious virus.

Investigating with a simple model

Practically, someone who stays home in Scenario A (when there is a 99.999% chance they have a cold) has the same impact as someone who stays home when there is a 12% chance they have COVID. One way to see this is to simulate an epidemic, and imagine that we institute an intervention for only two weeks, either early in the epidemic or late in the epidemic.

Let’s consider the mass-masking scenario. Without any interventions, each infected person passes the virus on to 2.5 additional people. We’ll exaggerate the effect of masks and imagine that everyone wearing masks reduces that number (R) to 0.8. In reality this magnitude of effect could only be achieved with a package of many interventions, but exaggerating it makes it easier to see what’s going on in a graph.

If we plot the epidemic with a logarithmic y-axis, then the steepness of the line is a reflection of how many people each infected person passes the virus to. So if we don’t implement any interventions, this graph goes up with a constant slope.

What happens if we wait until lots of people have the virus, and then bring in masks for 2 weeks to try to stop them passing it on? Well, it’s as you’d expect. The graph goes up as normal, then goes down as R falls to 0.8 for 2 weeks, then starts going up at the same rate again, but reaches a lower point after 80 days than without the two week intervention.

In this case we decided to bring in the 2-week intervention only after at least 1% of people were infected. This seems to make sense. Surely there is no point in making people wear masks when only 0.001% of them have the virus? But let’s check: what happens if we bring in the two weeks much earlier in the epidemic?

It turns out that the two weeks early in the epidemic has exactly the same effect on the number of infections on day 80! What matters is that the intervention has the same effect on R in either case: even though a far lower absolute number of infected people are wearing masks, R is reduced to 0.8 in both cases.
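To make this concrete, here is a minimal sketch of the toy model described above (not the exact code behind the graphs). It assumes a single index case, a five-day generation time and simple exponential growth, with R dropping from 2.5 to 0.8 for a two-week window that starts either early or late:

```python
import numpy as np

R_NORMAL = 2.5        # each infected person infects 2.5 others with no intervention
R_MASKS = 0.8         # exaggerated effect of universal mask-wearing, as above
GENERATION_TIME = 5   # assumed days per transmission generation
DAYS = 80

def simulate(intervention_start, intervention_length=14):
    """Daily infection counts, with R lowered during the intervention window."""
    infections = [1.0]  # a single index case on day 0
    for day in range(1, DAYS + 1):
        in_window = intervention_start <= day < intervention_start + intervention_length
        r = R_MASKS if in_window else R_NORMAL
        infections.append(infections[-1] * r ** (1 / GENERATION_TIME))
    return np.array(infections)

early = simulate(intervention_start=10)   # masks brought in early in the epidemic
late = simulate(intervention_start=60)    # masks brought in late in the epidemic
print(early[-1], late[-1])                # the day-80 totals are identical
```

Both runs apply exactly the same multiplicative brake to the epidemic, just at different times, which is why the day-80 totals match.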

Ethics: how should we each act in the time of COVID-19?

As COVID-19 was spreading in our communities, many people advocated for social-distancing measures. One way of reasoning about this is to think about the absolute chance you have of spreading the virus. For example, someone made a graph to discourage people from holding large events by showing that when there are a lot of infections in the community there is a good chance that a large event will lead to transmission:

[Image: graph of the chance that a large event leads to transmission, as the level of infection in the community varies]

Interestingly, this graph suggests that holding a large event when there is a low level of COVID in the community has a less negative effect than holding one when the level is high. But we’ve seen from the models above that this isn’t correct.

So how can we decide how to shape our behaviour? I think it makes sense to follow Kant’s “Categorical Imperative”.

Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.

So imagine if everyone acted as you intend to, and work out what the effect on R would be. If everyone wears masks then, regardless of the level of COVID in the community, R is reduced. If everyone holds large events then, regardless of the level of COVID in the community, R is increased.

In the absence of evidence on seroprevalence and immunity, we must aim to keep R below 1 until we have a vaccine. This will drive the level in the community down to near zero. It will allow vulnerable people to go about their lives as normal. However, to achieve these levels we must sustain our measures even as levels in the community drop. Even when we believe that only 30 people in the whole country are infected, we must all wear masks. It’s counter-intuitive, but it will work.

Community testing for COVID-19: reaching 25 million tests per week

Until recently, the strategy for COVID testing in many countries has focused on people who have a significant likelihood of having COVID-19. The need to impose unpleasant social-distancing measures, and ultimately complete lockdowns, trades off against how much testing you do in the community. With sufficient community testing, you could suppress COVID in a population without any social-distancing measures. This post is an outline of one way one might go about testing 25 million people per week in order to achieve this.

Decentralised testing

Reaching this sort of scale quickly is likely to require a decentralised approach leveraging existing organisations. The most obvious candidates are schools and larger businesses. Thirteen million people are employed by businesses with more than 50 employees, and there are twelve million children in schools. If these businesses and schools could test everyone within them each week, 25 million tests would be carried out.

It’s important to note that this sort of testing is completely different from the testing of symptomatic, or likely infected, individuals. Symptomatic testing rightly requires specialist training, uncomfortable nasopharyngeal swabs, and full PPE for the person testing and the person being tested. This is because there is a high risk of exposure to COVID-19, and because getting a test result wrong could have major implications for care.

Testing random individuals from a population with low incidence of COVID-19 poses no greater risk than interacting with them in any other way, as colleagues or students. And in this setting a significant rate of false-negatives or false-positives can be tolerated without major consequences.

Testing procedure

In this proposal all of these businesses and schools would be supplied with: simple nasal swabs, frozen plates containing COVID-19 RT-PCR mix, a basic one-button thermocycler, and a blue light transilluminator.

The employee responsible for testing takes one of the frozen plates out of the freezer and thaws it.

Students and employees go up, one by one, to the testing station. Each person takes a nasal swab and swabs their nostril (front-of-nose, not nasopharynx), then places the swab into a well of the plate, and cuts off the swab’s shaft and discards it. They also write their name on a sheet in a box representing this well, then wash their hands.

Once the plate is full of samples the employee responsible for testing takes the plate. They seal it, load it into the one-button-thermocycler, and press the button. The thermocycler runs an RT-PCR programme for an hour.

The employee takes the plate out of the machine and examines it with the blue light transilluminator. Any positive wells (including a pre-loaded control well) will glow brightly. Negative wells will be dull.

If the employee notices any positive wells (a very rare occurrence) they immediately look at the sheet of names and contact the person in question to ask them to report to public health authorities for confirmatory testing. If this test comes back positive then contact-tracing begins.

The aim would be to test each person at these schools and businesses once each week.
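As a rough feasibility check, here is the arithmetic for a single site. The 96-well plate format is my assumption; the one-hour run time is as described above:

```python
import math

WELLS_PER_PLATE = 96   # assumed plate format
CONTROL_WELLS = 1      # the pre-loaded control well mentioned above
RUN_TIME_HOURS = 1     # one RT-PCR run per plate

def plate_runs_per_week(people):
    """Number of one-hour plate runs needed to test everyone once a week."""
    return math.ceil(people / (WELLS_PER_PLATE - CONTROL_WELLS))

for site_size in (50, 500, 2000):
    runs = plate_runs_per_week(site_size)
    print(f"{site_size} people: {runs} runs (~{runs * RUN_TIME_HOURS} h) per week")
```

Even a site of 2,000 people would need only around 22 hours of machine time per week on a single thermocycler.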

Costs and availability

The reagents for this test cost much less than £1 per tested person.

A thermocycler is a simple device which can currently be purchased for £3,200, and could likely be manufactured for much less. Unlike ventilators, these could be relatively easily produced by other manufacturers, though production would certainly need to be scaled up to reach the numbers required.

A blue light transilluminator can be made for less than £40 from commonly available materials.

It seems likely that, despite their low cost, global supplies of RT-PCR enzymes are insufficient for this scale of testing at present. But there is no reason I am aware of to think that production could not easily be scaled up. The quantities of primers needed are readily available.

There might be a need to scale up the production of the DNA-binding dye.
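Putting the figures above together gives a sense of scale. The number of participating sites below is purely a placeholder for illustration:

```python
TESTS_PER_WEEK = 25_000_000
REAGENT_COST_PER_TEST = 1.00     # upper bound: "much less than £1"
THERMOCYCLER_COST = 3_200
TRANSILLUMINATOR_COST = 40
N_SITES = 100_000                # placeholder: the real count of schools and
                                 # businesses taking part would need establishing

weekly_reagents = TESTS_PER_WEEK * REAGENT_COST_PER_TEST
one_off_equipment = N_SITES * (THERMOCYCLER_COST + TRANSILLUMINATOR_COST)

print(f"Reagents: at most £{weekly_reagents:,.0f} per week")
print(f"Equipment (one-off, {N_SITES:,} sites): £{one_off_equipment:,.0f}")
```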

All of these costs are minuscule compared to the economic losses averted by preventing a lock-down.

Controlled infection with COVID-19 as a safer means of establishing herd immunity

Update: it seems (although clarity is lacking) that the UK has moved away from the herd-immunity strategy. I very much welcome this. It renders the argument below irrelevant, except as an illustration of the weaknesses of a strategy that seeks to vaccinate a population using an epidemic.

Disclaimer: I am neither a virologist, nor an immunologist, nor an epidemiologist. This post is written with confidence, because too many caveats hinder clarity, but should be understood to be the thoughts of a layperson.

This week the policy of the UK government on COVID-19, under advice from scientists, has been articulated clearly for the first time. The government intends for around 60% of the population to be infected with COVID-19 in the anticipation that after the majority who survive recover, the population as a whole will have ‘herd immunity’ to the virus, since if only 40% of the population can pass on the virus it will be unable to spread.

One can argue about the wisdom of the policy, which is in disagreement with the position of the WHO and many other countries. This post is not about that. What I am interested in is: given this policy (to allow 60% of the population to become infected), what is the least bad way to achieve this? For instance, should you let the virus spread at random through the population or should you actively infect people with the virus under very controlled conditions to minimise the risks?

The current approach

The government’s approach is to allow the virus to spread with relatively few restrictions initially. At a point before it overwhelms the healthcare system, it will introduce further social-distancing measures to attempt to “flatten the curve” and spread the 40 million infections over a longer time. There will also be some measures to try to ensure that the most vulnerable – older and immunocompromised people – are isolated, but it is unclear how effectively this can be achieved.

Risks of the current approach

Infecting the vulnerable. ~20% of the UK population is over 65. The fatality rate appears to be at least 10x higher in this demographic group, hence the government’s attempts to isolate them. At a point where a substantial portion of the population has the virus it is likely to be very difficult to achieve this. If 60% of the 3.2 million people over 80 became infected, and the fatality rate of 14% reported for this age group in China applied, that would represent around 270,000 deaths in this age group alone (a quick check of this figure follows these risks).

Overloading the healthcare system. Exponential growth is a powerful adversary and small miscalculations could result in overloading the healthcare system with the result that many people die simply because there are no ventilators available to keep them alive. There is concern among some experts that it will not be possible to flatten the curve enough to ensure that  everyone receives adequate care. Exponential growth is difficult to avoid in natural infections because people who get the virus will not initially know that they have it, and so will unknowingly spread it to others.

Viral load. There are reasons to expect that the more particles of virus a person is exposed to initially, the more likely they are to experience severe disease. In the government’s approach there is no control over this, and so it is possible that young and healthy people may develop severe symptoms due to exposure to a large viral load.
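As promised above, here is the quick check of the over-80s figure, using exactly the numbers quoted in the first risk:

```python
population_over_80 = 3_200_000
infected_fraction = 0.60     # the 60% herd-immunity target
fatality_rate = 0.14         # rate reported for this age group in China

deaths = population_over_80 * infected_fraction * fatality_rate
print(f"{deaths:,.0f} deaths")   # ~270,000
```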

An alternative approach

We have seen that the government actively intends for 60% of people to get the virus. It is almost as if the government were simply to go out and infect people itself, as at a chicken-pox party. So why not actually do this? Could there be advantages to actively exposing 60% of people to the virus, in a controlled way, to generate the same herd immunity?

Advantages of the controlled exposure approach

Not infecting the vulnerable – only fit and healthy people would be selected to receive the virus. Almost no children under 10 have developed severe symptoms from COVID-19. It would therefore seem very safe to put all immunocompetent children into the 60% who get the virus. Other young people are also comparatively resilient to the virus, and so a utilitarian argument would suggest they would be the safest people to inoculate with the virus. We will see below that this approach may be safer for the people who get infected than the alternative policy of letting the virus pass through the population.

Not overloading the healthcare system – there are two ways in which this controlled approach would avoid overloading the healthcare system. Firstly, by avoiding exponential growth: those infected would know they had been infected and could be isolated afterwards until they were no longer shedding virus. Once resistant, they could go out into the population, and safely work in crucial roles where they were likely to be exposed to the virus. Secondly, by reducing the hospitalisation rate: because these people would be drawn from a much more resilient population, the chances of them developing severe symptoms would be much lower. The result would be fewer people in hospitals, meaning that the small(er) number who did develop severe symptoms would be able to get adequate care. The upshot is that a 100% chance of being infected, with adequate care available if required, might be safer than a 60% chance of being infected, with the possibility of developing severe symptoms in a degraded healthcare system that is dealing with exponential growth and vulnerable patients.

Controlled viral load, and the possibility of an attenuated strain – there are several lines of evidence to suggest that the amount of virus one is initially exposed to may be a determinant of whether someone develops severe symptoms. By controlling the infection one could minimise the amount of virus someone is exposed to (while ensuring they develop an immune response). This might again substantially reduce the risk compared to a 60% chance of receiving a random viral load (determined by how near you are to an infected person and how much virus they are shedding). Finally, although proper "no-risk" vaccines are unlikely to be available for some time, my non-expert brain wonders whether it is possible to rapidly develop a version of the virus that would be expected to be mildly attenuated by, for example, leaving the proteins the virus encodes exactly the same, but altering the "codons" in its genome such that these proteins are likely to be produced less rapidly. The effectiveness of this could not be guaranteed, but on the balance of probabilities it seems very unlikely to be more dangerous than the wild-type virus.

Summary

The government are conducting an experiment in which they will allow 60% of people to be infected. To my mind this is in substance the same as the government randomly choosing 60% of people and actively infecting them with a random load of wild-type virus, but not telling them they are infected, with the result that they may spread it to vulnerable people. An alternative approach, in which selected resilient people were infected in a controlled manner with a low dose of an attenuated virus, might be safer for the people infected, as well as for society at large, than the current strategy.

Disclaimer

I am neither a virologist, nor an immunologist, nor an epidemiologist. I would welcome explanations of where this argument falls down from such people, or anyone else. Furthermore, I am not arguing for this approach as superior to a containment strategy, merely as superior to a strategy of natural infection. Finally, this is a suggestion for an organised governmental approach, not a DIY ‘chicken-pox party approach’ which would not control viral load and would therefore have great risk.

epMotion 5075 teardown

In my PhD lab we had an epMotion 5075 pipetting robot. I had a like/hate relationship with this machine. Like: it’s an impressive, precision-engineered piece of hardware. Hate: the software is appalling. Writing protocols for it was slow, frustrating and generally awful, and there was little flexibility in what one could make it do.

Recently I heard that the lab was having a clear-out, including disposing of this (pricey when purchased) robot, and I asked if I could adopt it in preference to the scrapheap, which I was kindly allowed to do. I’m not in a wet-lab at the moment so for now it will live in a garage, but I did want to have a peek inside to get a better understanding of how it works, and to work out whether it would be possible to customise it to be more flexible.

If I were buying my own scientific hardware I would always go for the upstart companies like OpenTrons and Incuvers which tell you how their hardware works and allow you to do whatever you want with it. With the epMotion, by contrast, if you want to use new labware you have to send a physical version of it to the company which they measure to generate a proprietary calibration file.

I was given some hope that it might be possible to customise the robot from this video, in which someone has replaced all the electronics of the robot with a standard board for a CNC machine:

But other than that I could find very little on the internet about what is inside these robots. I think that’s a shame, and now I have one at my disposal, without a warranty. So here is a run-down of what happens when you take it apart, in case it is useful to anyone in a similar position.

First steps

The back panels come off very easily with a hex-key and expose the computer that runs the machine. This runs some version of Windows, maybe Windows CE. It has USB and ethernet ports, although to my knowledge these can’t be used for anything useful on my version of the robot. In general I doubt there is any easy way to make this computer do anything other than what Eppendorf has programmed it to do, without access to the underlying source code.

Removing the top required, in my case, peeling off a little double-sided tape at either side, in addition to undoing two hex-key bolts.

There is a heavy-duty belt for the X axis, driven by a big stepper motor. My robot had been essentially unused for several years and the rail over which the X-carriage runs had become covered with a sticky substance. This caused the motor to stall mid-run, but cleaning it off with some alcohol resolved the issue.

The computer that is the brains of the operation – unfortunately unlikely to be easily repurposable.

Basics

Each of the X, Y and Z axes is controlled by a stepper motor (the X-axis one is this). They each have optical endstops with 4 wires. In the video above these endstops have been replaced with mechanical switches but it really should be possible to use them as-is.

X-belt and optical end-stop

Y-axis stepper motor, belt, and optical end-stop.

Cabling

One of the challenges of making a many-axis robot is that signals have to be carried to each successive axis, all of which are connected together. So flexible cabling is needed – but at the same time it must not get in the way or fall into the samples. In the case of the epMotion this is achieved with ribbon cables like this:

But it quickly becomes apparent that this cable doesn’t have enough wires to be simply directly connected at the other end to stepper motors / endstops / etc. Instead it seems that this is some sort of serial cable that carries data signals to a series of other microprocessors, one on the robot’s pipetting arm, and one for each of the Y and Z axes, which then interface with the Y motor, Z motor, the tool locking motor, the pipetting motor, the tip-ejecting actuator, and the range-detector.

If you want to hack this thing you’ll have to decide whether to make and mount four separate pieces of control hardware, or to replace the cabling with a much thicker bundle of wires.

Pipetting arm

Lurking under the metal cover of the tool arm is a profusion of electronics. There is a lot for it to do: an (infrared?) sensor to measure distance, and actuation for gripping a tool, identifying it, pipetting up and down, and ejecting a tip.

Selecting/using tools

One of the very impressive things about the epMotion robot is its ability to change tools during operation. It can choose from a variety of single channel and multichannel pipettes, and even a plate gripper.

Tools

How does this process work?

The tool arm has two coaxial motors. One is, I believe, a simple DC motor with very low gearing. It rotates a piece of metal internal to the arm which causes the arm to firmly grip whichever tool it is currently over. I’m not quite sure how the robot knows when this rotation is finished. My suspicion is that it detects the change in current flowing through the motor when the motor stalls at the end. Certainly if you disconnect this motor, the robot is able to detect that ‘the engine is not responding’, and tells you so.

Looking up at the inside of the tool gripper to see how it works.

When one examines the pipettes themselves, one notices they have electrical contacts, but these are simply used to tell the robot which tool is in which position. The pipettes are in fact mechanical rather than electronic devices. They all have the same rotatable top-piece, and as this is spun by a stepper motor in the tool arm they aspirate/dispense liquid (or, in the case of the gripper, grab and release). As this piece is rotated the tool begins to extend a rod out from it. Inside the tool-gripper this rod must make contact with a switch, and this is used to “home” the pipette to ensure the robot knows the position of the plunger.

Homed tool with thin rod extended to make contact with switch. Electrical contacts for tool ID visible to the right.

Prospects for customisation

I’m going to pause my hardware work here, because it isn’t yet clear exactly what the application of the robot will be for me and I don’t want to destroy any necessary functionality.

If I had continued I would have one way or another tried to marry up the epMotion hardware with the open-source OpenTrons robot-control software. This basically means adapting the hardware such that one knows how to control it and then writing a custom driver for the OpenTrons software.

I do think this is completely achievable. The video above already shows how 3-axis control is possible using a standard CNC board. Controlling aspirate/dispense as a fourth axis should be similarly simple. If my understanding of how the tool interlock works is correct, then that also wouldn’t be too challenging – one would just need to measure the current flowing through the motor. An even simpler strategy would be to keep one tool permanently locked onto the machine.
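To make that concrete, here is a sketch of what driving the rewired axes might look like, assuming the steppers were connected to a standard 3D-printer board running Marlin-style firmware that accepts G-code over serial (as in the video above). The port name, feed rates and the idea of mapping the pipetting motor to the ‘E’ axis are my assumptions, not anything epMotion-specific:

```python
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # placeholder serial port for the CNC/3D-printer board

def send(ser, line):
    """Send one G-code line and wait for the firmware's 'ok' acknowledgement."""
    ser.write((line + "\n").encode())
    while True:
        reply = ser.readline().decode(errors="ignore").strip()
        if reply.startswith("ok"):
            return

with serial.Serial(PORT, 115200, timeout=2) as ser:
    time.sleep(2)                       # give the board time to reset after connecting
    send(ser, "G28 X Y Z")              # home all axes against the endstops
    send(ser, "G90")                    # absolute positioning
    send(ser, "G1 X50 Y30 Z10 F1000")   # move the tool arm to a position (mm)
    send(ser, "G1 E5 F100")             # treat the pipetting motor as a fourth axis
```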

One decision one would have to make would be whether to have a single control board and have lots and lots of wires running to the tool-arm, or to use the existing ribbon cables and have a separate controller on the tool arm controlled over serial. I suspect the latter might be the better approach.

More generally, if I do this I will have to consider whether I want to be limited to expensive epMotion robot tips, the only ones compatible with any of these tools. I suspect the answer is no. In that case I might end up bolting an OT-2 electronic pipette to the pipetting arm, though this again loses the advantages of tool-changing. Or maybe I’ll go with something completely different like a vacuum pump and a peristaltic pump – we’ll see.

In general none of this looks trivial, and one is almost certainly better off just buying an inexpensive OT-2. Still, it’s nice to have a better understanding of what is going on inside this intricately engineered machine.

 

Update:

It has just occurred to me (another useful reason for writing things down) that there may be an easier and less invasive way to get control of this thing. If one can reverse-engineer the serial protocol that the computer uses to control the Y-axis, Z-axis, tool interlock, aspiration, tip ejection (and distance measuring), then one can get control of all of these without messing with their hardware. It seems possible that this could be achieved relatively simply (if the commands are sent in a text-based format), and when I have access to the machine again in 6 months’ time I will investigate. The 8 leads in the ribbon cable could be: V+, GND, Y-out, Y-in, Z-out, Z-in, pipette-out, pipette-in.
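If I do go down this route, a first step might look something like the sketch below: tap what appears to be the data line with a cheap USB-serial adapter (or a logic analyser) and dump traffic at a few candidate baud rates while jogging an axis from the epMotion’s own software. The port, baud rates and logic levels here are all guesses that would need checking before connecting anything:

```python
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # placeholder: the adapter tapped onto the ribbon cable
CANDIDATE_BAUD_RATES = [9600, 19200, 38400, 57600, 115200]

for baud in CANDIDATE_BAUD_RATES:
    with serial.Serial(PORT, baud, timeout=1) as ser:
        data = ser.read(256)   # capture a burst of traffic while jogging an axis
        printable = sum(32 <= b < 127 for b in data)
        print(baud, len(data), "bytes,", printable, "printable")
        # a text-based protocol will show up as mostly printable bytes at one baud rate
```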

BigGAN interpolations

The state of the art in image generation is BigGAN.

Now, some trained models have been made available, including the capacity to interpolate between classes. I made a colab to easily create animations from these.

They are pretty fun.

What is more, they make it clear that the latent space captures meaningful shared properties across classes. The poses of quite different animals are conserved, and “cat eyes” clearly map onto “dog eyes” during interpolation. These sorts of properties suggest that the network ‘understands’ the scene it is generating.
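The interpolation itself is simple: hold the latent vector fixed and blend the class conditioning. Here is a rough sketch of the idea, where `generate` stands in for a wrapper around the released BigGAN generator (e.g. the TF-Hub module); it is not the exact code in the colab:

```python
import numpy as np

def interpolate_classes(generate, z, class_a, class_b, n_frames=48):
    """Hold the latent z fixed and blend the class conditioning from A to B."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        y = (1.0 - t) * class_a + t * class_b   # linear blend of the class vectors
        frames.append(generate(z, y))           # one image per interpolation step
    return frames
```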

Continue reading “BigGAN interpolations”

Adventures with InfoGANs: towards generative models of biological images (part 2)

In the last post I introduced neural networks, generative adversarial networks (GANs) and InfoGANs.

In this post I’ll describe the motivation and strategy for creating a GAN which generates images of biological cells, like this:
Continue reading “Adventures with InfoGANs: towards generative models of biological images (part 2)”

Adventures with InfoGANs: towards generative models of biological images (part 1)

I recently began an AI Residency at Google, which I am enjoying a great deal. I have been experimenting with deep-learning approaches for a few years now, but am excited to immerse myself in this world over the coming year. Biology increasingly generates very large datasets and I am convinced that novel machine-learning approaches will be essential to make the most of them.

At the beginning of my residency, I was advised to complete a mini-project which largely reimplements existing work, as an introduction to new tools.  In this post I’m going to describe what I got up to during that first few weeks, which culminated in the tool below that conjures up new images of red blood cells infected with malaria parasites:
Continue reading “Adventures with InfoGANs: towards generative models of biological images (part 1)”

How I stumbled upon a novel genome for a malaria-like parasite of primates

Sometimes in science you come across things that are definitely interesting, and useful, but which you don’t have time to write up properly for one reason or another. I’m going to try to get into the habit of sharing these as blog-posts rather than letting them disappear. Here is one such story.


Not long ago I was searching for orthologs of a malaria gene of interest on the NCBI non-redundant database, which allows one to search across the entire (sequenced) tree of life. Here is a recreation of what I saw:

I was surprised to see that nestled among the Plasmodium species was a sequence from a species called Piliocolobus tephrosceles. Continue reading “How I stumbled upon a novel genome for a malaria-like parasite of primates”

Saving 99.5%: automating a manual microscope with a 3D printed adapter

TL;DR: Some 3D-printing hackery can create an automated microscope stage from a manual stage for ~0.5% of the cost from the manufacturer.


I have always wanted access to a microscope with an automated stage. The ability to scan an entire slide/plate for a cell of interest seems to unlock a wealth of new possibilities.

Sadly, these systems cost quite a bit. The lab I work in now has a Leica DMi8 microscope with automated movement in Z, but XY movement is (on our model) still manual. It is possible to purchase an automated XY stage for this microscope, but the list-price quote is around £12,000 (including the stage, control hardware, and software).

I’m not going to argue that this price is unreasonable. I am sure that the manufacturers of scientific equipment spend a lot of time and money innovating, and that money has to be made back by selling devices which have relatively small production runs. Nevertheless, the result is that the costs of kit that makes it to market are fairly staggering – and this prevents someone like me from being able to play around with an automated stage.

But I still wanted to experiment with an automated stage! So I wondered how easy this would be to do myself. After all, we have a manual stage, and we move it by rotating two knobs. Couldn’t I just get motors to turn those instead of doing it with my hand?

As I thought this through further I realised it was slightly more complicated than that. Firstly, the knobs are coaxial, making them rather harder to deal with than two separate shafts would be. And secondly, as you rotate the X-knob, the shaft moves in X.

So the motors need to be able to move with it. But they also need to be able to rotate and exert a twisting force on the knob – so they need to move linearly while being locked in one orientation.

Hardware: 3D printed pieces, 2 stepper motors and a RAMPS controller

I made a quick design in OpenSCAD:

Basically, the first knob, which controls movement in Y, is simply connected to a motor below via a (red) sleeve. The knob above, which controls movement in X, is placed inside a (blue) sleeve which surrounds it with a gear. That gear is driven by a (turquoise) gear turned by a second motor. Both motors are mounted on a (transparent) piece which also connects them to an LM6LUU linear bearing, allowing them to slide while keeping their orientation constant.

I printed out these 3 pieces, then tweaked the dimensions a little so they sat more snugly on the knobs and printed them again. The final STL files, and the SCAD file that generated them, are available on Thingiverse.

To control it I connected the steppers to a trusty RAMPS 3D printer controller. These cost £30 with a screen and a rocker controller (the Leica hardware to control a stage is ~£3k). Since the 3D printer controller is also set up to control the temperature of a hot-end and a heated bed, it might be ideal if you want to add a warm stage down the line.

Initial tests controlling the position of the stage using the RAMPS controller went well, and let me calibrate the number of steps per micrometer.
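The calibration itself is just a division, done empirically: command a known number of steps, measure the resulting travel under the microscope (for example with a stage micrometer slide), and divide. The numbers below are placeholders, not my measured values:

```python
steps_commanded = 10_000      # steps sent to the X stepper
distance_moved_um = 2_500.0   # travel measured with a stage micrometer slide

steps_per_um = steps_commanded / distance_moved_um
print(f"{steps_per_um:.2f} steps per micrometer")   # used to convert positions to steps
```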

Software: MicroManager

Regrettably, the Leica software isn’t going to allow you to easily hook it up to an Arduino-based controller. But, as ever, open-source software comes to the rescue. Micro-Manager is a very advanced ImageJ plugin that can connect to the Leica camera, and to the microscope itself to control filter cube positions, Z-focusing, etc.

Don’t expect quite the user-friendliness of Leica software from Micro-Manager, but do expect a wealth of packages to perform common operations in automated microscopy (Leica charges ~£2.5k for the software to revisit a position multiple times – which was included in the quote given above).

Theoretically, MicroManager even allows you to control XY position using a RAMPS controller – someone has already written a package for exactly this board. This step, which should have been trivial, was actually the most complicated. The device adapter is designed to ask the RAMPS controller for its version, and somehow I could never make my board submit a response that the software was happy with. I had to download the MicroManager source and remove the code that checked the version. Successfully setting up the build environment for Windows took an age. Do get in touch if you have a similar project and want the DLL I built [update: DLL here, I offer no guarantees at all that it will work. This is an x64 build which will only work with a recent nightly build] [update 2: Nikita Vladimirov has followed up on this and released the changes he had to make to MicroManager]. Anyway, to cut a long story short I got MicroManager to talk to the RAMPS board successfully.

Testing by making a 100X oil immersion slide scanner

Now to put it into practice.

I wrote a Beanshell script to scan a slide in X and Y and capture images. In this case I captured a grid 40 fields wide by 30 fields high, for a total of 1,200 images.
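The scan loop itself is very simple. The original was a Beanshell script run from within Micro-Manager; the Python-flavoured sketch below captures the idea, with `move_to` and `snap_and_save` standing in for whatever stage-move and capture calls your acquisition software exposes, and a made-up field of view:

```python
GRID_COLS, GRID_ROWS = 40, 30                    # the 40 x 30 grid described above
FIELD_WIDTH_UM, FIELD_HEIGHT_UM = 100.0, 75.0    # placeholder field of view at 100X
OVERLAP = 0.15                                   # fractional overlap needed for stitching

def scan(move_to, snap_and_save):
    """Visit each grid position in a snake pattern and capture one image per tile."""
    step_x = FIELD_WIDTH_UM * (1 - OVERLAP)
    step_y = FIELD_HEIGHT_UM * (1 - OVERLAP)
    for row in range(GRID_ROWS):
        for col in range(GRID_COLS):
            # snake pattern keeps stage travel (and backlash) to a minimum
            x = (col if row % 2 == 0 else GRID_COLS - 1 - col) * step_x
            y = row * step_y
            move_to(x, y)
            snap_and_save(f"tile_r{row:02d}_c{col:02d}.tif")
```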

This took a few minutes – try doing that by hand. Then I stitched the images together with the MIST plugin. The result is a 27,000 x 12,000 pixel image, featuring a whole lot of red blood cells. You can zoom in on the version below. This was taken with a 100X oil immersion objective, at which magnification the smallest motion of the stage is a substantial fraction of the image width, but still leaves enough overlap for stitching.

Fun! There is still a bit more experimenting to do, but I’m hoping to get this acquiring images of tagged proteins from 96-well plates.

Caveat for anyone who tries to implement this: obviously be very careful not to create significant non-twisting forces on the coaxial knobs – you don’t want to damage your stage and ruin the alignment.