Junior Doctors

So, junior doctors in the UK have been striking for a while. Anyone reading this is probably familiar with it, or if not, they’re familiar with Google.

I’m passionate about my support for our junior doctors, and all our NHS staff in the UK. Particularly junior doctors right now.

First off, I’m 36. 18 years ago I found out I had type 1 diabetes. Without medical science, I’d have been dead in pretty short (and horrible) order. Junior doctors didn’t diagnose me or treat me shortly thereafter, but every consultant that did used to be one. I was also diagnosed two weeks before my A-levels, and I got put back together and through those A-levels with flying colours, which not only kept me alive but has been the foundation of my life since. I think I owe my life to the NHS in two senses. I’d probably have survived under other healthcare systems, but would my life have been not just saved but my future preserved?

Since then I’ve been pretty lucky in managing things well, with some blips. I’ve never (in those first 18 years or the second 18 years) ended up overnight in a hospital. My consultants and GPs and nurses have been fabulous. I’ve rarely actually had cause to run into junior doctors in the NHS myself.

I also want to say that during my undergraduate and postgraduate education, I had medical students around me as friends and flatmates. No students worked harder than them. They worked themselves to the bone, and that was before they qualified. I could not have done it. It would have broken me.

I’ve seen a couple of junior doctors personally. One, when I saw them, was clearly pretty worn down. But when I fell apart purely from the relief of having someone I felt I could talk to about a problem that was already basically resolved, but had been eating me up, all they cared about was me. I never saw them again after that, and I wish I had, because I never got the chance to say thanks. A chance to get it all off my chest was what I needed, and they were the rock I needed. I imagine the junior doctor’s very hospital-based lifestyle means patients are often gone before they recover to the point where they can say thanks properly. If you’re one of those doctors who has never seen a patient again, it’s probably a good thing, and they’d give you a bloody big hug now if they could, because all those things you might think of as everyday are, for the patients, anything but. They can be lifesavers.

Other doctors I’ve seen have again been clearly worn down, and have still somehow mustered the mental energy to fight against an underfunded and poorly staffed system to get their patients what they need. I don’t know how they’ve done it. I’ve seen some fight so hard, while so tired themselves, they’re miracles.

The NHS hasn’t been perfect, for sure. I’ve seen situations where I’ve felt like I’m stuck in an episode of House, except the patient just hasn’t been ill enough to warrant a cranky guy with a team of crack doctors, and it’s been obvious that the focus has been on getting limited money where it’s most effective. I’ve had doctors misdiagnose me, but others have rediagnosed me, by virtue of experience gained as junior doctors. One thing I’ve never felt I’ve needed is a “7 day” NHS (quote marks because the service I’ve had on weekends has been fabulous).

I’ve been walking through a hospital and run into an old but not particularly close friend working as a doctor, and seen their own personal exhaustion flip almost instantly into concern for me, though I was only emotionally knackered and their colleagues were already doing an outstanding job on the patient I cared about. Mate, if you are reading this, I know we’re not close friends or anything, but just that change in expression when you saw I looked miserable made the world of difference to me. I think you know who you are.

It’s a career path where people constantly, and almost entirely without complaint, give themselves to the community. Junior doctors, and their more senior colleagues (or to put it another way, their future selves), give everything from the moment they commit to a medical degree, and stick with it through years of training. I simply could not do it, and I admire them.

I would aspire to be as good a person as them, but I don’t have it in me. I literally don’t. I could not do it. I delivered some biscuits and chocolates to a junior doctors’ picket line today, and perversely the doctor I passed them on to called me a hero. My response:

I delivered biscuits in a bit of a rush while changing trains. With all due respect, I think we can all agree that’s just a touch ridiculous. I’m not the hero. I’m alive, and I owe my career, and that life, and my happiness, and frankly my sanity to you all.

I try to be a good person and I mostly don’t do a terrible job. Jeremy Hunt, though, has worn me down. After all the above, and the way he’s treating doctors, I can’t get myself to be a good enough person not to hate him. I don’t want to dwell on it, but he makes me a worse person, and I can’t help that. Doctors almost universally seem better than that, despite all the disagreements they have with him.

Why do I love all those doctors so much? Obviously for all the reasons above, but more importantly because they’re the complete opposite of that. I’d rather aspire to be as good as them, as positive about life, and as caring, than to anything else. They make me healthier, but more importantly they make me better. They’re everything we should all be and more. They deserve our support, and they deserve to be happy and healthy while helping the rest of us, and this contract imposition will not make that happen.

Junior doctors – you’re heroes, and I’m sorry that so often, when you’ve done your job, your patient isn’t ready to tell you what a difference you’ve made. I think calling you the backbone of the NHS is an understatement, when the NHS is itself the backbone of the nation. I feel really lucky to have swapped just some flipping chocolate biscuits for a BMA “I support junior doctors” badge, and I’m going to wear it with pride.

Bad physics demonstrations

Reposting old content here, as I saw one of these come up again recently. It’s some blabbering about a couple of bad physics demonstrations:

The first is a very common description of gravity as described by Einstein’s theory of general relativity (GR). GR essentially describes gravity as due to the curvature of spacetime, and that curvature as being due to matter. Almost as soon as someone starts to try to explain it, they reach for the rubber sheet analogy: you have a rubber sheet, a ball on that sheet curves it, and the curvature of the sheet affects the motion of other objects upon it. I link to that page to describe the analogy, and I’ve no criticism of the page itself, which doesn’t use the analogy badly. But there are two major problems with the analogy.

  • The ball deforms the sheet – why? It’s because the weight of the ball is pushing down upon that sheet. What gives the ball weight? Gravity, which is the thing this analogy is trying to describe. Now that’s not too serious, you can simply point that out and say that the mechanism by which mass curves spacetime is not described – it simply does so.
  • If you roll a small ball slowly past the larger ball, its path curves in and it rolls toward the big ball. This is because the small ball rolls downhill. Why does it do this? Because of gravity. This is a serious problem. There’s a path deflection due to the geometry of the sheet, but there’s also a path deflection because of a preexisting downward force upon the ball. This is almost never pointed out, and most worryingly the effect looks exactly like what people generally think of as gravitational attraction – a movement towards, rather than a deflection of a path from what would naively be considered straight.

Professor Brian Cox, using this description in Wonders of the Universe, I’m looking at you.

When you first studied magnetism at school, what demonstrations of it do you recall? I bet that a very early one was putting a sheet of paper over a magnet and sprinkling iron filings over it. You get something like this image – iron filings lining up upon field lines. You’re then also shown a diagram like this showing discrete field lines.

What’s wrong with this? Well, field lines don’t come in discrete chunks. They’re continuous. Every point in space has a magnetic field line passing through it, and the field strength doesn’t vary in some onion-skin way. There’s nothing special about the places where the iron filings line up, any more than Earth’s gravitational pull exists only in particular places rather than smoothly over the entire surface. I bet you that every kid comes out of that lesson thinking magnetic fields look something like an onion. I did, and it took me a disturbingly long time to figure out that they don’t, and why, because no one ever corrected that misconception.

What is actually happening is that every iron filing is itself becoming magnetised and is drawing adjacent filings towards itself. It’s like they’re concentrating the field where they are. The filings are an active part of the field – they’re not what a physicist might call a ‘test particle’ that doesn’t affect the things around it and only traces out some physical phenomenon.
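If you want to convince yourself the field really is smooth, you can evaluate an ideal point dipole’s field at any point you like. Here’s a crude numerical sketch in arbitrary units (a bar magnet isn’t a point dipole, but far away it behaves like one):

```python
import numpy as np

def dipole_B(x, z, m=1.0):
    """Field of an ideal point dipole m along z, in units where
    mu0 / (4 pi) = 1: B = (3 (m . rhat) rhat - m vector) / r^3."""
    r = np.hypot(x, z)
    rx, rz = x / r, z / r
    m_dot_rhat = m * rz
    Bx = 3 * m_dot_rhat * rx / r**3
    Bz = (3 * m_dot_rhat * rz - m) / r**3
    return Bx, Bz

# Sample |B| along a line: it falls off smoothly, with no special
# radii where a "field line" does or doesn't exist.
xs = np.linspace(1.0, 3.0, 5)
for x in xs:
    Bx, Bz = dipole_B(x, 0.5)
    print(f"x = {x:.1f}: |B| = {np.hypot(Bx, Bz):.4f}")
```

There’s a perfectly good value at every point you sample. The handful of discrete lines in the textbook diagram are a drawing convention, and the filings pick their own.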

This is really problematic. People come away from seeing this thinking that magnetic fields are hairy.

Anyway, as with the older post of this, I’m always interested in hearing about better demonstrations of anything that’s usually shown in a dodgy way.

The facts from Safer Medicines

I’ve long had issues with the Safer Medicines (SM) campaign, from back when they were “Europeans for Medical Progress” (EMP).

They’re a group that would like us to stop animal testing because, they say, it leads to unsafe results. They’re quite explicit in wanting to discuss this away from the ethical issues surrounding animal testing.

I want to be quite clear at the start: I don’t have any issue with anyone who thinks the current animal testing done on medicines is unethical, and if someone wants to do away with it on those grounds, that seems quite reasonable to me, even if I’m personally not in that category. Where I stand is that I would love it if we could do away with animal testing, but I’m not convinced that we can without increased risk to people, and that if we can minimise animal suffering enough, there’s a case that it is acceptable. The current situation seems to be that animal testing is unfortunately necessary. I certainly feel that animal testing should be avoided wherever possible (and I expect the overwhelming majority of the scientists doing animal testing would feel the same), and I would like it if someone like Safer Medicines could come along and convince me that the era of needing animal testing is clearly over. The problem is that so often what they say is not only unconvincing, but that they give statistics which, on examination, turn out to mean absolutely nothing.

The first time I came across SM was a letter in New Scientist. It was very possibly this one (from back when they were EMP). Let’s take a look at the main bit of evidence they provide:

There are serious scientific objections to primate experimentation, the track record of which is in our view abysmal. Eighty HIV vaccines – 50 preventive and 30 therapeutic, according to the US National Institutes of Health – have failed in human trials following success in primates.

Right, so using that information, can you tell me what percentage of potential HIV vaccines that would have been dangerous got caught by animal testing? Can you tell me what percentage of dangerous vaccines got through animal testing? Can you tell me what percentage of safe and effective vaccines failed animal testing? Of course you can’t. It’s just a scary-sounding number. And there’s no indication in the letter of what you’d replace animal testing with, or how effective that would have been in comparison.

You can take a look at their letters page and although sometimes better evidence is presented, often it’s just as bad as that 2006 letter.

This is sadly pretty typical. Absolute figures are given about some number of failures of some medicine in some trial, without the other figures necessary to draw conclusions about how effective animal testing is or isn’t.

This current gripe is inspired by the 2014 newsletter which I found in a Sainsbury’s supermarket in London, Feb 14th 2016. I hadn’t actually spotted the date at the time.

Anyway, let’s take a look at the evidence it presents – focussing on where numbers are given:

Page 1

A million people are hospitalised by their medicines every year in the UK, costing the NHS £2 billion

How many people and how much money would dropping animal testing, or bringing in new tests save? We don’t know.

Page 2

Several studies have calculated the ability of animal tests to predict adverse drug reactions. Estimates are often below 50 per cent. A recent study shows that animal tests missed 81 per cent of the serious side effects of 43 drugs that went on to harm patients.

OK, this is a bit better, but not a lot. First off, even a test with a low ability to predict an adverse drug reaction is useful if you wouldn’t have detected that reaction another way, and it’s not clear from that information whether another way was available. Also, consider “81 per cent of the serious side effects of 43 drugs that went on to harm patients”: imagine a drug whose serious side effects were spotted in the animal test. Do you think it would go on to market? Drugs that harm patients are, almost by definition, drugs the tests passed, so looking only at them tells you nothing about how many harmful drugs the tests stopped.

Page 4

The Institute for Safe Medication Practices calculated that in 2011, prescription drugs were associated with two to four million people in the US experiencing “serious, disabling, or fatal injuries, including 128,000 deaths.”

As with page 1.

Page 5

FRAME (Fund for the Replacement of Animals in Medical Experiments) has recently published an analysis of the value of studies in dogs for predicting the safety of human medicines. The salient feature of this study is the use of appropriate statistical metrics, which have not previously been applied to such data. The results shine a new light on our reliance on dogs for this purpose, suggesting that they contribute little or nothing to ensuring our safety. The paper and a presentation by lead author Dr Jarrod Bailey can be viewed from our website.

An honourable mention here. I’d set out to go through looking for evidence with numbers. This just says there’s some, and gives a reference. So it’s an improvement, but I’m not sure I can count this as a win for SM.

A major study by a large consortium of researchers has revealed why every one of nearly 150 drugs tested in patients with sepsis (the leading cause of death in intensive-care units) has failed.

Well this looks pretty damning, but there’s no information on whether another method of identifying potential sepsis drugs would do better. Unfortunately there’s no reference to the paper, and only a mention that it was rejected from both Science and Nature – this isn’t necessarily a problem as they’re hard journals to get anything into, but the Nature Medicine comment lets you eventually find your way here. Reading the paper, the authors seem pretty clear in suggesting improvements to animal models and the development of new ones. There’s no indication in there that I can see that other currently existing tests would have been better. I’m also being pretty generous in looking at the paper at all, given most people picking up the leaflet probably wouldn’t know how to find it. At least it’s Open Access.

Page 6

Newly published research demonstrates the ability of BioMAP Systems, a unique set of primary human cell and co-culture assays that model human disease, to identify important safety aspects of drugs and chemicals more efficiently and accurately than can be achieved by animal testing.
Data from 776 environmental chemicals, including reference pharmaceuticals and failed drugs, were analysed as part of the US EPA (Environmental Protection Agency) ToxCast Programme.

OK, it’s good if there are tests for safety more effective than animal testing, but in terms of numbers all you’ve said is that 776 chemicals were analysed, not what the results were. You did give a reference, though I’m not sure the average punter reading the leaflet is likely to chase it up.

Page 7

Hepregen’s human ‘HepatoPac’ micro-liver is predictive of liver damage from fialuridine (a potential treatment for hepatitis B) – an effect that was not predicted by animal studies, resulting in severe liver damage in 7 of 15 people in the 1993 clinical trial: five of whom died.

This is terribly sad. However, it doesn’t tell us how many people have avoided horrible side effects from animal testing, and whether the HepatoPac detects any of those as well. Who knows, maybe the best thing to do is animal testing and HepatoPac testing?

The legacy of that approach is that despite a decade of effort using genetically modified mice, more than 300 potential treatments have been successful in animals but not a single one has proved effective in human patients.

First off, was there a more effective route at the time? Secondly, those 300 haven’t proved effective – but it’s not clear whether all 300 have been shown to be ineffective, or whether they simply haven’t made it through testing within the decade in question.

So, to summarise – repeatedly we see essentially useless numbers given to us to try to support their case. It’s just depressing. Look again at SM’s aims. They want safe and effective treatments reaching patients fast – no-one could argue with that. They want open discussion – great, but I don’t see that that isn’t happening. They want independent testing of testing methods against each other – great also. They also say “The effectiveness of animal tests has never been measured against a panel of state-of-the-art techniques based on human biology.” – and that doesn’t seem to be a bad idea either. Then look at their evidence why, and again we see numbers given without the background to actually determine success and failure rates.

Maybe they don’t have them, and that’s a large part of why they want independent testing of methods, but I think it’s just disgraceful to put out scary sounding numbers which could (depending on the numbers of false positives, false negatives, and genuine positives and negatives) end up meaning all sorts of things in reality.

Useless information doesn’t help anyone reach the right conclusions.

Technical support

Sometimes technical support is not very good. I’ve had a BT Broadband account for some time, which is generally very good, nice and speedy for me. However, I’m supposed to get free wireless on BT’s Openzone hotspot network, and this has never worked for me.

One of my favoured pubs now uses Openzone for its hotspot, so I thought I’d better get that sorted. I log in and request to talk to an agent. I’m asked to give an initial question; it turns out the text field will let me type quite a bit, but I can only submit 240 characters’ worth. That isn’t quite enough for as much detail as I’d like, but I condense it down to this.

I am unable to log in to BT Openzone although I understand that I should be able to as a BT Broadband customer. I am able to log in to my BT account on the web.

I wait a fairly short minute or so for an agent.

Ayush: Hello. I'm Ayush. Thanks for that information, I'll check it and get back to you in a moment.
Me: Hi, thank you.
Ayush: May I have your account number please?
Me: one moment, I'll try to find it
Me: Does xxxxxxxx seem correct?
Me: Ah yes, xxxxxxxx
Ayush: Thanks
Ayush: eddedmondson this is your bt id
Me: yes
Ayush: http://www.bt.com/mybt
Ayush: xxxxxxx?
Me: xxxxxxxx
Ayush: this is your security question
Ayush: http://www.bt.com/mybt please click on this link
Me: yes, I'm logged in there
Ayush: Have been able to login?
Me: yes
Ayush: I hope that I have resolved your query? Is there anything else I may help you with?
Me: No it hasn't. I can log in on the website but I can't log in to Openzone wireless
Me: I'd already said in my initial question I could log in to the account on the bt.com webpage.
Ayush: The same login id and password will work for the openzone as well.
Me: But they don't.
Me: Please elevate this to a higher level.
Ayush has disconnected.
Jyoti: Hello. I'm Jyoti. Thanks for that information, I'll check it and get back to you in a moment.
Jyoti: You need to register yourself by going to this link to access BT Wi Fi.
Jyoti: https://www.bt.com/wifi/secure/index.do?s_cid=con_FURL_btwifi&utm_source=ATL&utm_medium=FURL&utm_content=A&utm_campaign=btwifi
Me: Great, thank you.
Me: I see it is refusing to accept either my email address or username, but I guess I can change my username to my email address. I'll get back in touch if that doesn't work.
Jyoti: Alright, no worries, you can try this and if it won't work you can contact us back later.
Me: Thanks, bye!

I go to change my username. Immediately bt.com tells me my existing username is invalid, so there’s a good start. I change it to one of my email addresses. I go back to the link above, and it refuses to accept it: it tells me I have to have a BT email address. Fine, I go to bt.com again and try to get a BT email address, of which I should get up to 11. I can’t, at least not trivially – I’d probably have to get back in touch with them or phone up for more support.

I give up. It turns out, however, that I can log in now, so all the advice I got was basically incorrect, but at least it got me going in the right direction. I’m most annoyed at whoever decided BT usernames should all be email addresses without actually verifying that all the current ones were, or enforcing the change, but that was some pretty spectacularly crap help, especially at first.

I scored 125% on this rationality quiz. How about you?

You may have seen this quiz which claims to measure and classify your style of rational thinking. There’s a nice enough io9.com article on it too. Feel free to take the time to go and do the quiz and read that first. As a warning, their site is extremely slow, and despite my first inclination when it took an age to load that doesn’t seem to be some kind of patience test for the quiz!

The quiz has some nice aspects to it, but a number of flaws, and I’ll explain how I got (or rather, *spoilers*, gave myself) 125% towards the end. I also got classed as a detective, which seemed pretty reasonable.

First flaw: there’s a question in which you are told you have been doing a number of essays for a school class. They’ve taken you something like 3, 2, and 4 weeks to complete, and the next one you expect to complete in 1 week, for various reasons. How much time do you budget for it?

To get the maximum rationality score, you apparently have to ignore the information that “you expect to complete it in 1 week” – an expectation that apparently has justifications, and if it’s my expectation, it certainly should! If I felt I didn’t have those justifications, I wouldn’t expect to complete it in 1 week. So to get the “best” score I have to make what I consider an invalid judgement.

Second flaw: there’s a question (actually several questions along this line) where you have to decide whether you want an amazing meal cooked for you now, or one such meal in a year and another a year after that. Along with other delayed-payoff questions, and presumably a measure of your consistency in answering them, they make assumptions about you that may not be valid. You may rationally decide (as I did) to take the two-meal option with the delay, but if the delays were not 1 and 2 years but 100 and 200 years, you’d obviously be mad to pick it. The question is tied to assumptions about how long you have to wait that require you to make additional assumptions about what the test writer intended. The same kind of meta-reasoning in the first case I was annoyed by led me to think I should answer with a 3-week budget – the average of past experience – rather than my actual lower answer, but I chose to stick with my actual feelings in all cases.

Third flaw: I’m not sure this is a flaw. I’m unable to load the detailed results. Anyway, there are several questions which require you to assess how often you make mistakes, and whether you make repeated mistakes. I don’t think I answered any other questions irrationally, but I certainly said I made mistakes and repeated mistakes. I don’t believe that is irrational. Rationality is not about being right all the time, it’s about reasoning your way to maximising something. Being perfectly rational alone can’t reach any conclusions on how to act as it doesn’t have any motivation. You need some metric – some measure of happiness, or something you want to maximise. Doing that isn’t going to mean you get all questions right all the time. Even if your aim is to answer questions correctly, you probably won’t have good enough information to do that all the time.

It’s also said that you shouldn’t make the same mistake twice, but you certainly should if you made the first mistake for reasons that are still valid. If you make a mistake because your reasoning was flawed the first time, then you should fix that flaw and do something different the next time. I however am of course perfectly rational so my first mistake didn’t come about that way 😛

So I’m not sure how I only got about 75% score on this. I can only assume that 25% off 100% is because I thought more than the person who wrote the test, so I must be 25% more rational than what they call 100%, so I must be 125% rational.

This doesn’t mean that if you get 0% you got a perfect score though. There’s a flaw in reasoning that way…

Medical advances

I don’t often talk about it but I’m a type 1 diabetic – have been since just before I finished school when I was 18. Since then I’ve benefited from a few advances – new and better kinds of insulin for example which came out in the years I’ve had it. Insulin now is made from genetically engineered bacteria rather than the pancreases of dead animals, and that genetic engineering lets scientists twiddle the precise structure of the protein to change its action, resulting in both faster and slower acting insulins than normal human insulin would be when injected. This is great, and has led to a big improvement in the quality of my life. Obviously the mere existence of insulin as a treatment has not led to just improved quality of life but a dozen years of life I’d not have had at all otherwise.

I’m now two weeks into trying out another new piece of technology – a continuous glucose monitor. It’s just brilliant. Before, I’d have to draw blood from my fingertips to check my glucose level (which is important – too low and I risk going unconscious, too high and I risk numerous health problems later in life). Now I have a device stuck to my arm, changed every 14 days, that lets me check my glucose level (with a few caveats) using a handheld widget that talks to the sensor over near-field communication. It’s totally life-changing. Before, testing the recommended 4 times a day was a real pain, and frankly I wasn’t doing it. Now it tests continuously (at least as continuously as the timescale these things change on) and I can check it by pulling something out of my pocket, pushing a button and putting it near the back of my arm. It’s easier than checking Twitter or my text messages, and given how often I do that, it isn’t surprising that I feel this is making a huge improvement to my quality of life. It’s quick, discreet, and effective. I check how things are going loads of times during the day, and I get not just the current value but a rough idea of the rate of change and the recent history. I absolutely love it.

The downside is that currently it’s not on the NHS. This may well change. I’m very lucky that I can afford to run one, and I hope every diabetic in the UK that wants one gets one on the NHS soon, but that’s something for the powers that be to assess (I’m honestly very grateful that I get such good service and free prescriptions from the NHS as it is).

If you’re curious, here’s the manufacturer’s site. I’m really grateful to researchers, doctors, and engineers that come up with these advances, and I’m not ashamed to say it even if it means the next time I challenge some alt-medder or anti-vaccer that they point at this and claim I’m some big pharma shill. I’m not paid to say this. I’m paying them a lot of money for this for now, but I genuinely think it’s worth it for me.

(If you’re diabetic and get one of these, look out for the Facebook group of users too – get in touch if you want an invite or whatever as it’s a closed group)

Andromeda Galaxy

I did this a while back but haven’t blogged it until now for one reason or another:
Andromeda Galaxy

It’s composed of 308 light frames of 20s each, taken with the Celestron 11″ with Hyperstar. As with other shots, longer exposures end up being saturated by light pollution (which is subtracted in this final image) so I’m still imaging with much shorter exposures than many astrophotographers who go into the timescale of minutes.
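For anyone curious why stacking lots of short subs works, here’s a toy simulation (purely illustrative numbers, not my actual camera’s) showing that averaging N frames with independent noise cuts the noise by roughly √N:

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 100.0     # hypothetical true pixel value (arbitrary units)
noise_sd = 20.0    # per-frame noise standard deviation (invented)
n_frames = 308     # number of 20 s subs, as in the image above

# Each sub is the true signal plus independent random noise,
# simulated here over 1000 pixels.
subs = signal + noise_sd * rng.standard_normal((n_frames, 1000))

stack = subs.mean(axis=0)   # simple average stack

print(f"single-sub noise: {subs[0].std():.1f}")
print(f"stacked noise:    {stack.std():.2f}")
```

The averaged stack ends up with roughly 20/√308 ≈ 1.1 units of noise, a ~17× improvement, which is why hundreds of short subs can compete with much longer individual exposures when light pollution rules those out.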

It was also the first serious outing for a new Orion Mini Guidescope with an Orion autoguider. It worked very nicely, and PHD2 is a superb bit of software for operating the guider. For me it doesn’t have a massive impact on individual subs, since my exposures are short, but it helps maintain the object’s position in the frame over the entire integration and thereby maximises how much of the camera’s frame I get to use in the end result – important for an object as large in angular size as Andromeda! You’d also not normally use such a short focal length guidescope with a telescope as big as an 11″, but with the Hyperstar the telescope is also quite short in focal length, and the combination works well.

There’s still some problematic stuff near the edge of the frame, showing up as a bit of a green tinge in the bottom right, but it’s not too much of an issue.

I’m very happy with this setup as you might guess! I might get this one printed…

Calibration success

So, the last time I blogged (a long while back, for personal reasons) I had various troubles with dark currents and came up with a half-baked but workable solution. I got back to imaging recently, and the first session was pretty much wasted by the same kinds of dark issues, with extended or bright sources causing the previous solution to not quite work satisfactorily.

However, the problem seems to be severely aggravated by overly high ISO settings. Dropping the ISO to 800 for a session on the night of the 3rd and 4th gave pretty much unqualified success with standard processing steps.

The full set is still being calibrated, but the first two-thirds or so, targeted at Messier 27 (the Dumbbell Nebula), have been calibrated and integrated, and I threw together a quick processing. I’ll go back and do a more careful processing and integration in the next week or so. So here it is:

M27 - the Dumbbell Nebula

A total of 306 exposures of 20s, at f/2 on the 11″. The red Hα emission is captured far better than in previous attempts, thanks to the long integration with faster optics (my Nikon is unmodded, meaning its internal filter blocks that emission fairly heavily), so overall it’s looking pretty good now.

Dark frame woe

I had a good run of imaging on the 11″ SCT the other night, using the Hyperstar again. A fair amount of time was spent on my friend Hanny’s Voorwerp – as something of a challenge to see if I could capture it. I believe I did get the faintest detection above the noise, but it’s really slight so I’ll forgo posting that image – forgive me Hanny! I’ve kept the data and may acquire some more in the future to have another go.

As mentioned in my first light post, dark frames have been a real issue. Let me explain a bit about calibrating astronomical images first.

Digital sensors like CCDs and the CMOS sensors in everyday digital cameras record, fairly obviously, the amount of light they receive. However, even when there is no light coming in they still record a certain signal level. There’s a bias amount on each pixel which is present however short your exposure is, and there’s a dark current which grows the longer you expose and also depends on the temperature. Astronomers deal with these by taking separate bias and dark images – exposures without any light – and subtracting them off afterwards. Some digital cameras take care of this for you in everyday use by doing the same thing – taking a dark frame the same length as your exposure as soon as you’ve taken your shot – but if you did this during observing you’d waste half the time you could be imaging. So astronomers usually take these frames before or after observing.

They also take flat frames, which are usually made by looking at a uniformly lit surface (the twilight sky, a specially made uniformly lit panel, or simply the inside of the telescope’s dome if they have one). This corrects for the amount of light you end up recording at a given pixel compared to how much you should have had, and you apply it by dividing your image through by the flat.
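Putting the two steps together, calibration amounts to a per-pixel formula: subtract the master dark (which includes the bias, provided the dark matches your exposure) and divide by the normalised master flat. Here’s a minimal NumPy sketch of the idea – the function and variable names are mine, and for simplicity it assumes the flats were shot with the same exposure settings as the darks:

```python
import numpy as np

def calibrate(light, darks, flats):
    """Basic sensor-frame calibration: (light - dark) / flat."""
    # Master dark: median of dark frames taken at the same exposure
    # length and (ideally) temperature as the light frame.
    master_dark = np.median(darks, axis=0)

    # Master flat: median of flat frames, dark-subtracted, then
    # normalised so dividing by it preserves overall brightness.
    master_flat = np.median(flats, axis=0) - master_dark
    master_flat /= np.mean(master_flat)

    # Subtract bias + dark current, then correct pixel-to-pixel
    # sensitivity and vignetting by dividing through by the flat.
    return (light - master_dark) / master_flat
```

Real calibration pipelines (PixInsight’s included) do considerably more – outlier rejection, separate bias or flat-dark frames, dark scaling – but the core arithmetic is just this.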

If you fail to correctly apply a good dark frame to your images, some pixels will end up brighter than they should be. If you are doing unguided exposures (so your telescope, if it has a motor, just blindly points where it thinks it should, rather than having a camera connected to a computer that keeps a bright star in the same place during your imaging), these pixels will not fall in the same place when you align and add your images together to make a stacked, deeper image. The result in a bad case can be something like this:

This is the sort of problem I’d been having with dark frame correction. Despite trying to get frames of the right length and right temperature to match my images, and despite PixInsight (my imaging package of choice) having routines to scale these to match images, I just wasn’t getting rid of all the dark pixels.

After going to bed with things still not right, I woke up with something of an answer. I stacked all my images together without aligning them, so that the stars moved but the problematic dark-current pixels stayed in the same place, and statistically removed the outlying brightest and darkest parts of the stack before averaging. This effectively removes almost all the contribution from the stars (as they’re only in a given place for a small number of the unguided images) and gives an estimate of the overall sky brightness – mostly light pollution – plus the remaining problematic dark current. I could then remove this, using PixInsight’s dark scaling to reduce the noise in the frames, realign, and restack. Result:
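The trick can be sketched in a few lines: treat the unaligned stack as repeated samples at each pixel, and sigma-clip away the outliers (the stars, which drift between unguided frames) before averaging. This is a rough illustration of the idea, not what PixInsight actually does internally:

```python
import numpy as np

def residual_background(frames, clip=2.0):
    """Estimate sky background + residual dark current from a stack
    of UNALIGNED frames. Stars drift between unguided exposures, so
    per-pixel outlier rejection removes them, while hot pixels and
    sky glow stay put and survive the average."""
    stack = np.asarray(frames, dtype=float)
    med = np.median(stack, axis=0)
    sigma = np.std(stack, axis=0)
    # Reject values more than `clip` sigma from the per-pixel median
    # (mostly star light passing through that pixel in a few frames).
    keep = np.abs(stack - med) <= clip * sigma + 1e-12
    return np.sum(stack * keep, axis=0) / np.maximum(keep.sum(axis=0), 1)

# The estimate is then subtracted from each frame before aligning
# and restacking:
#   cleaned = [f - residual_background(frames) for f in frames]
```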
Messier 95:
Messier 95

Messier 58:
Messier 58

(with some extra processing applied of course, to aid prettiness, and they aren’t stretched the same as the first image, but if you look closely you can see the background noise in the M95 image and see the difference)

The downside is that if any signal from real sources survives into the extra calibration image, you get dark banding around them in the result due to oversubtraction, but that’s a small price to pay for removing the bright banding from bad darks.

I’d still much prefer to get the dark calibration right first time, but at least this makes the data somewhat repairable. As to why the dark calibration isn’t working – I’m not sure, but I suspect it might be related to my camera’s raw format not being precisely linear, and very likely also to less-than-ideal temperature control. A better camera is the answer, but I can’t afford one right now. Another approach many have success with is dithering between frames – small pointing offsets that spread the bright banding out in a way that isn’t so visible to the eye – but that’s labour-intensive for my setup.

Still more tweaks I plan to make to try to get the best out of the Hyperstar, but those will come on the next clear night I have free.

Mars at Opposition

Mars was at opposition the other night, and luckily the night was clear, so what better time to have a first go at planetary imaging on the 11″ SCT?

I started off with Jupiter – the bluish-grey area is just an artifact from dust on the sensor.
With the 2x Barlow, that puts it at a focal length of about 5.6m (F/20).
The earlier image is this one:
which would be a 4.7m focal length at F/20.
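For reference, these figures follow from the usual relation focal length = aperture × focal ratio. An 11″ SCT has a nominal aperture of about 280 mm and a native focal ratio of F/10, and the 2× Barlow doubles the effective focal length:

```python
# Nominal aperture of an 11" SCT in millimetres (11 * 25.4).
aperture_mm = 11 * 25.4

# Native focal ratio F/10; the 2x Barlow doubles it to F/20.
native_focal_length = aperture_mm * 10       # ~2794 mm
barlow_focal_length = native_focal_length * 2  # ~5588 mm, i.e. about 5.6 m

focal_ratio = barlow_focal_length / aperture_mm  # F/20
```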

On to Mars:
I believe the left whitish spot is Hellas(?) with clouds on the right a bit under 90 degrees round from it. Correct me if I’m wrong though.

All images done with a Logitech Quickcam 4000 and Registax (and sometimes extra tweaking with PixInsight).

Hoping to get back to some deep sky imaging soon, and work more on the dark current problems that have been plaguing me there.