Are longer subs really better? An experiment in broadband and light pollution.


Last time on Deep Sky Detail: using an exposure length that overcomes the read noise is much better than using shorter subs. But are 10-minute subs better than 2-minute ones, even if the 2-minute ones have sufficiently overcome the read noise?

"Huh, that was a good video. We should follow it up. What did you say?"
"Oh, I was just saying that we need to look at long versus normal exposures. Probably 10-minute versus 5-minute subs."
"I agree. I wonder what the results would look like."
"Well, let me check. We'll use the patent-pending time machine preview DSD viewer. We've only got a few seconds."
"Holy cow, I wasn't expecting the results to be this clear. This is really interesting. So basically, the 300-second subs..."
"Rats. Lost the connection."

Welcome to Deep Sky Detail. I've done two experiments testing how much better 10-minute subs are than 5-minute subs. But first, some background: a few months back I ran an experiment testing whether 2-minute subframes are better than 15-second subframes. If you're new to astrophotography, let me explain. In astrophotography, the things we image are really, really faint (well, except for the stars). Ideally, we would just take one really long exposure to bring out the detail, but it's hard to track an object as the Earth rotates and it moves across the sky, so we put our telescopes on mounts that track the stars. Too long an exposure can be bad: the gears in the mount aren't perfect, so you get guiding errors; the stars start to get saturated; and light pollution can wash out a long exposure. So instead of one long exposure, we take several shorter exposures, called subframes or subs, and average them together. We call this averaging "stacking." With a perfect camera and perfect conditions, the average of the shorter subframes should be pretty much identical to one longer subframe. But cameras aren't perfect, and conditions aren't perfect. So the question is: what is the ideal sub-exposure time? Should you go as long as you can, or should you keep them shorter?
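That "perfect camera" claim can be checked with a quick simulation. This is an illustrative sketch, not anyone's actual pipeline: with no read noise, a scaled stack of ten 60-second subs and a single 600-second exposure are draws from the same distribution (the photon rate is an assumed value).

```python
import random

random.seed(1)

RATE = 2.0  # photons per second hitting one pixel (assumed value)

def exposure(seconds):
    """Photon count for one exposure, approximating Poisson shot noise
    with a Gaussian of matching mean and variance (fine for large counts)."""
    mean = RATE * seconds
    return max(0.0, random.gauss(mean, mean ** 0.5))

def stack(n_subs, seconds_each):
    """Sum of n subframes = average scaled back to total accumulated signal."""
    return sum(exposure(seconds_each) for _ in range(n_subs))

# With an ideal (read-noise-free) sensor, both of these are draws from a
# distribution with mean 1200 photons and standard deviation ~34.6:
print(stack(10, 60))
print(exposure(600))
```

Run either many times and the means and spreads match; the whole sub-length debate only exists because real sensors add read noise at every readout.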
That's what my previous video on the subject was about: comparing very short 15-second subframes to more normal 120-second subframes. The answer, in terms of signal-to-noise ratio, was clear: the 120-second subframes were much better. Signal-to-noise ratio, or SNR, put simply, is an important factor for bringing out detail in your image; the more SNR, the better. A stack of 160 15-second subframes (40 minutes of total exposure time) had only half the SNR of a stack of 20 120-second subframes (also 40 minutes of total exposure time). So the longer exposures were better. I ended that video with a question: what about 10-minute subs? So this video will compare 5-minute subs with 10-minute subs. I'll be checking SNR, tracking, star size, and trying to assess something called blotchiness, or mottling.

So what equipment will I be using? I've got a 6-inch Ritchey-Chrétien at a focal length of 1,370 mm. Attached is my ZWO ASI294MM Pro camera, on an Orion Sirius mount, and I'll be using an off-axis guider. The experiment took place under Bortle 6 or 7 skies, meaning there was a good amount of light pollution, but it could be worse. My first target is Messier 101, and the second target is the Iris Nebula. I systematically alternated between 10- and 5-minute subs each night of imaging. This should help rule out systematic differences between the subframes that aren't due to the actual length of the subframe; in other words, it helps eliminate third variables, like the weather, affecting one set of subframes.

The first experiment, with Messier 101, only examines the SNR of the two types of subframes. This experiment is a pilot study of sorts, to see if my mount can actually handle 10-minute subs. It will also help remove doubts some of you may have about SNR as a measurement. SNR can be a really good measurement if done correctly; it can be a bad measurement if the wrong assumptions are made (I did a video on that). If the pilot M101 test agrees with the larger Iris Nebula test, then hopefully you can have more confidence in the results.
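The factor-of-two result from the previous video falls straight out of the standard stacking-SNR formula. In this sketch, the flux and read-noise numbers are assumptions I picked so the arithmetic is clean, not measurements from the video; the point is the shape of the formula, where read noise is paid once per subframe.

```python
import math

# Illustrative numbers (assumptions, not measurements from the video):
FLUX = 0.1        # signal + sky electrons per second per pixel
READ_NOISE = 3.0  # electrons RMS added by each readout

def stack_snr(n_subs, seconds_each, flux=FLUX, read_noise=READ_NOISE):
    """SNR of an average-stacked image: total signal over total noise.
    Each subframe contributes shot-noise variance plus one read-noise
    variance, which is why many short subs can lose."""
    per_sub_signal = flux * seconds_each
    per_sub_variance = per_sub_signal + read_noise ** 2
    total_signal = n_subs * per_sub_signal
    total_noise = math.sqrt(n_subs * per_sub_variance)
    return total_signal / total_noise

print(stack_snr(160, 15))   # 160 x 15 s = 40 min
print(stack_snr(20, 120))   # 20 x 120 s = 40 min: exactly twice the SNR here
```

With these rates, the 15-second subs sit below the read-noise floor and the 120-second subs are well above it, so the same 40 minutes of integration yields a 2x SNR difference. Once both sub lengths are comfortably above the read noise, the formula says the difference nearly vanishes, which is exactly the hypothesis this video tests.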
In the second experiment, I'll be looking at the Iris Nebula. I'll first look at the signal-to-noise ratio of the bright areas compared to the darker areas; more on that in a minute. I'll also examine the size of the stars, the tracking, and something called blockiness, or blotchiness.

Let's talk about the debate around SNR for a bit: does sub-exposure length matter? In my personal view, too short an exposure can be bad for SNR, but after you reach a certain point, it doesn't matter. A while ago, though, I saw a presentation by Tim Hutchinson on The Astro Imaging Channel, where he said this: "The real question is, how long can I go? Because remember, the longer we expose, the greater impact we have on our shot noise for every single frame, and then as we stack those frames, the better we're going to be when we're trying to drive that noise down, away from the signal that we've captured." He then said: "As I said already, your camera doesn't have infinite dynamic range. So if you have a little teeny tiny signal like this that you try and stretch, what's it going to look like? Well, you've taken that little teeny tiny signal of a couple of ADUs and stretched it over a whole bunch of ADUs. It's going to look blotchy. That's where you get this mottled, kind of blocky look that you see in some images."

When I first heard this, I was kind of skeptical, to be honest. The traditional view that I've held is that as long as the signal overcomes the read noise, very long subs are pointless, especially if you have any kind of light pollution. If the signal hitting one pixel is something like 100 photons per 120 seconds, and for another pixel it's 110 photons per 120 seconds, the average of a whole bunch of subframes is going to be about 100 for the first pixel and 110 for the second. If you have a 600-second sub instead, the first pixel will average out to 500 photons and the second to 550. As long as the signal overcomes the read noise, stretching is done in software, so
everything should be good. But as time went along, I had to ask myself: could I be wrong? So I tried to figure out how I could be wrong. One way is that maybe the fainter parts of the nebula would benefit from longer exposures: because they are so faint, the signal might be near the read noise. If this is true, then in an SNR measurement, the 10-minute subs should get more SNR for the faint details compared to the 5-minute subs, but the SNR should be similar for the medium-bright and brightest areas. If I graphed it, it might look like this, or this, or this. I doubt this, though. I think that light pollution will destroy a lot of the faint signal, meaning there will be too much light-pollution noise in the fainter areas for it to make a difference. If that's the case, then the graph would look like this, or this, or this. Notice that the difference between the two hypotheses depends on the SNR of the faint areas: if I'm right, there will be no difference; if longer subs do matter, there will be a difference in the fainter areas. SNR is the measurement of choice here.

Another way I could be wrong is related to what Tim said about amplifying faint signal. Now, Tim is an electrical engineer by training, so he probably knows a bit more than me about converting an analog signal into a digital output. How that conversion would lead to blockiness is a bit beyond my comprehension, but I started thinking about faint signal, where blockiness can and does show up in a lot of astrophotos I've taken. And I thought: well, if the signal is very faint, then the range of values the sensor will detect is actually kind of narrow. If there are five photons hitting the sensor per subframe on average, then in 99% of the subframes your sensor will detect between 0 and 12 photons. But if you expose for 10 times longer, on average the sensor will detect 50 photons, and the range of probable values will be from about 33 to 69. The range of probable values is three times greater with the longer exposure.
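Those ranges follow from Poisson (shot-noise) statistics. A small sketch that computes the central 99% interval of a Poisson distribution from scratch, using the photon rates quoted above:

```python
import math

def poisson_99_interval(mean):
    """Smallest [lo, hi] covering the central 99% of a Poisson
    distribution, found by scanning the CDF term by term."""
    p = math.exp(-mean)   # P(X = 0)
    cdf = p
    k = 0
    lo = 0 if cdf >= 0.005 else None
    while cdf < 0.995:
        k += 1
        p *= mean / k     # P(X = k) from P(X = k - 1)
        cdf += p
        if lo is None and cdf >= 0.005:
            lo = k
    return lo, k

short = poisson_99_interval(5)    # ~5 photons per short sub
long_ = poisson_99_interval(50)   # 10x the exposure
print(short)   # (0, 12)
print(long_)   # roughly 32-33 up to ~69: about three times wider
```

The interval width grows like the square root of the exposure time (sqrt(10) is about 3.2), which is where the "three times greater" comes from.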
It really seems like you have a bigger range to work with if you expose longer, which could allow for more incremental brightness values when you stretch your image. So exposing for longer may actually decrease the blockiness of the image in the fainter areas. I don't know if this is correct, so in my test I'll check the number of unique brightness values in a stack.

So, I imaged about one hour of Messier 101. Guiding for the 300-second subs and 600-second subs was actually really good. I threw out an equal amount of time for both types of subs due to guiding errors: one 600-second subframe and two 300-second subframes. Not bad. I will say that the nights I imaged were pretty clear, there was no wind, and I had good polar alignment, so I wouldn't always expect 600-second subframes to perform as well as 300-second subframes, especially if there's a lot of wind or your mount can't handle it. I think I was lucky more than anything else. I also threw away one more 600-second sub and two more 300-second subs because high-level clouds were messing things up. Also not bad, and again, I think I got pretty lucky. Both sets of images were calibrated with darks, flats, and dark flats, although I had some pretty bad dust motes. But I'm not here to make a pretty picture.

Let's first analyze some star shapes. I'm going to do the analysis of the stars first for the Iris Nebula. I've got the stack of 300-second subs on the left and the stack of 600-second subs on the right. I've selected the same area in both and run a Dynamic PSF in Siril with the same settings. For the 300-second subs, we had 44 stars total, and the average full width at half maximum (FWHM) is about five and a half: 5.75 in the x direction and 5.04 in the y direction. For the 600-second subs, we also have 44 stars, with the FWHM also somewhere around five and a half: 5.72 in the x direction and 5.02 or 5.04 in the y direction. So the star sizes are pretty similar, meaning that both of these images are about equally blurry.
To test SNR, I took the core of M101 and divided it up into 121 equal sections, or samples. The two stacks were aligned in Siril. I then took a subframe, got its SNR for the 121 samples, then stacked another image, got the new SNR, and kept repeating for all the subframes. I then repeated this for the other exposure length. You know, it took a lot of work; you might consider subscribing if you haven't yet. I'd appreciate it.

All right, so I've got all of the data extracted for the different parts of the galaxy, and we're going to see if the SNR is different between the 300-second subs and the 600-second subs. I'm going to go ahead and run this analysis. Holy cow, I wasn't expecting the results to be this clear. This is really interesting. Basically, the 300-second subs got more SNR than the 600-second subs, but the result wasn't what we call statistically significant, so there's probably nothing going on here.

Okay, this next analysis is a bit more complicated. For each type of subframe, I had a little over three hours of data. There were some guiding errors: for the 600-second subs I threw out two frames, and for the 300-second subs I threw out four frames, so in terms of integration time lost, it was again equal. I can't believe my luck. I was also really careful with polar alignment, though, so maybe it wasn't 100% luck. The nights I imaged on were generally clear, and I threw out one 600-second sub and two 300-second subs due to the roof of my house interfering with the images. This time I took 676 samples; you'll see why in a few minutes, but really, I needed more samples this time. The two stacks were aligned in Siril. I then took a subframe for one of the exposure types, got its SNR for the 676 samples, then stacked another image, got the new SNR, and repeated it for the subs of the other exposure length. You know, this took a lot of work; you might consider subscribing if you haven't already. I'd appreciate it. But that's not all.
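The per-sample SNR procedure used for both targets can be sketched as follows. This is my reconstruction, not the author's actual script: the measured region is cut into an equal grid of patches, and each patch's SNR is estimated as its mean pixel value divided by its standard deviation.

```python
import math

def sample_snr(patch):
    """Estimate SNR of one sample (a flat list of pixel values)
    as mean / standard deviation."""
    n = len(patch)
    mean = sum(patch) / n
    var = sum((v - mean) ** 2 for v in patch) / n
    return mean / math.sqrt(var) if var > 0 else float("inf")

def grid_samples(image, samples_per_side):
    """Split a square 2-D image (list of rows) into equal square patches,
    e.g. 11 x 11 = 121 samples for the M101 core."""
    step = len(image) // samples_per_side
    patches = []
    for gy in range(samples_per_side):
        for gx in range(samples_per_side):
            patches.append([image[y][x]
                            for y in range(gy * step, (gy + 1) * step)
                            for x in range(gx * step, (gx + 1) * step)])
    return patches

# Toy 22 x 22 "image": constant signal 100 plus a deterministic ripple,
# just to show the plumbing.
toy = [[100 + ((x * 7 + y * 3) % 5) for x in range(22)] for y in range(22)]
snrs = [sample_snr(p) for p in grid_samples(toy, 11)]
print(len(snrs))  # 121 samples, like the M101 analysis
```

Repeating this after each additional subframe is added to the stack gives the SNR-versus-integration curves the video compares; 676 samples is just a 26 x 26 grid of the same idea.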
I divided the samples into three groups: bright, medium, and dim. The bright group was the brightest 25% of samples, the dim group was the bottom 25%, and the medium group was the middle 50%. Let's see what's going on; let me run the analysis.

Before we look at the SNR results, let's look at the blockiness question. I've run the data for the SNR results, and I'm pretty confident in those results, like, really confident. It's the blockiness question that troubles me, and I think I need your help to figure it out. First, I'm not sure whether I've got a good analysis for it, and second, whatever analysis I did come up with might be explained better by something else, like shot noise. If you have a better way of doing things, let me know in the comments; I'd appreciate it.

For the blockiness question, I figured I could just count the number of unique ADU values in each stack. An ADU value translates to how bright a pixel is, and the number of bits in an image determines how many unique ADU values it can have. So a 16-bit stacked image can have 65,536 unique shades of gray, and in a 16-bit stack there might not be any room for decimal places, especially for the fainter parts of the target you're imaging. The part of the Iris Nebula image that I'm testing has 11,694,368 total pixels. If I'm processing a 16-bit image, then there are only about 65,000 total brightness values any one pixel can take (I'll talk about 32-bit stacks in a second), which means many pixels have to share the same brightness value. For the 10-minute subs, the range of probable values might increase by a factor of about 1.4 due to the increased variability of the shot noise, and that might "in theory" increase the "effective dynamic range." I put "in theory" and "effective dynamic range" in quotes because I'm not confident these are the correct terms; this analysis, I think, has a lot of moving parts. More on that in a moment. So I stacked both the 600-second subs and the 300-second subs as 16-bit images first. The 300-second subs had 5,842 unique brightness values; the 600-second subs had
8,432 unique brightness values. That's 1.44 times more unique brightness values than the 300-second subs, right in line with the theoretical value of 1.41 (the square root of two, since doubling the exposure doubles the shot-noise variance). So, are longer subs better?
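You can reproduce that square-root-of-two effect with a toy model. In the sketch below, every number is made up for illustration: two faint Gaussian pixel-value distributions whose spreads differ by sqrt(2) are quantized to integer ADU steps, and the wider one lands on roughly 1.4 times as many unique values.

```python
import random

random.seed(42)

def unique_adus(mean, sigma, n_pixels):
    """Quantize n_pixels Gaussian-distributed pixel values to integer
    ADU steps and count how many distinct values appear."""
    return len({round(random.gauss(mean, sigma)) for _ in range(n_pixels)})

N = 100_000
short_count = unique_adus(500, 20.0, N)             # 300 s spread (assumed)
long_count = unique_adus(1000, 20.0 * 2 ** 0.5, N)  # 600 s: sqrt(2) wider
print(short_count, long_count, long_count / short_count)  # ratio near 1.4
```

This is only a cartoon of the real measurement, but it shows why a ratio near 1.41 is exactly what shot noise alone predicts, with or without any real extra detail in the data.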

Well, for a 32-bit image, we would have about 4 billion possible values, and when we stack an image, we don't have to worry about rounding error as much, because we don't lose as much information. Let me show you what I mean, and let's talk about why you might expect to see more unique ADU values when you take longer subs for 16-bit images.

Let's pretend we have two pixels on our camera, recording different parts of the sky right next to each other, and we have 300-second subs and 600-second subs. Pixel one records these values on 10 different subframes and averages 5.5 photons over those 10 subframes; pixel two records about 6 on average. So there's about half a photon's worth of difference. When you're stacking into a 16-bit image, there's not enough room for that rounding. What do I mean by that? Well, the stars in the image are very bright, so they're all going to be around 60,000 to 65,000 ADU. When you stack those images, the stars get averaged as being bright, and you only have a limited number of ADU values left for the low-signal areas. The stacking software knows this, and it effectively says, "We have to round both of these numbers to 6," losing some information to rounding error. If you take 600-second exposures instead, you might get these values: instead of an average of 5.5, pixel one averages 11, and instead of an average of 6, pixel two averages 12. The stacking software says, "Okay, those are actually different values; we can assign 11 to this pixel and 12 to that pixel." There's more room in the midtones than in the faintest areas, so the longer subframes can be assigned different ADUs. Now, the thing about a 32-bit image is that instead of just 65,000 unique values, you have about 4 billion. You don't have to round anymore; there's enough space in the lower areas that you can actually keep this relationship: 5.5 ADU for one pixel and 6 ADU for another pixel, and it's fine.
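Here is a minimal sketch of that rounding argument. The quantization scheme is a simplification I chose to make the point, not what any particular stacking program actually does: averaging the same hypothetical subframe values at integer (16-bit-like) precision merges the two pixels, while float (32-bit-like) precision keeps them distinct.

```python
# Hypothetical per-subframe photon counts for two neighboring faint pixels
pixel1_300s = [5, 6, 5, 6, 5, 6, 5, 6, 5, 6]   # mean 5.5
pixel2_300s = [6, 6, 6, 6, 6, 6, 6, 6, 6, 6]   # mean 6.0

def stack_16bit_like(subs):
    """Integer precision: the average is rounded to a whole ADU step."""
    return round(sum(subs) / len(subs))

def stack_32bit_like(subs):
    """Float precision: fractional averages survive."""
    return sum(subs) / len(subs)

# 16-bit-like stacking merges the two pixels...
print(stack_16bit_like(pixel1_300s), stack_16bit_like(pixel2_300s))  # 6 6
# ...while 32-bit-like stacking keeps them distinct
print(stack_32bit_like(pixel1_300s), stack_32bit_like(pixel2_300s))  # 5.5 6.0
```

Doubling the exposure (means of 11 and 12) separates the pixels even at integer precision, which is the 16-bit advantage the unique-value counts picked up; stacking in 32 bits gets you the same separation without longer subs.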
So you still have two unique values in the 32-bit stack of 300-second subs, and you still have two unique values in the 32-bit stack of 600-second subs. When you stack in 32 bits, you're not losing that information to rounding error, or at least not as much. So with a 16-bit image, the 600-second stack did have more unique brightness values; it seems like we're running into the rounding-error problem. But what if we stack our subframes as a 32-bit image? Looking at the number of unique values in each stack, the 300-second stack and the 600-second stack now come out nearly identical, at several million unique values each. So if you're worried about losing information because your subs aren't long enough, I would stack a 32-bit image. Of course, there are a lot of other variables to think about, like the bit depth of your monitor, principles of human perception, and the fact that the unique values may just be part of the shot noise, among other things. Feel free to suggest another way of analyzing blockiness.

So let's take a look at the SNR measurements of the Iris Nebula again. The SNR of both stacks was around 10, as an average over all of the samples, and the difference was not statistically significant, just like the Messier 101 experiment. Also recall that in this experiment we're looking at the dimmest 25% of samples compared to the brightest 25%. Let's go ahead and run that analysis. Well, it looks like for the dimmest 25% of samples, there was no difference in SNR between the 300-second and 600-second stacks: the dimmest parts of both got about 4 SNR. The Bayesian statistics really favor the idea that there is no difference. Quick side note: if you're a stats geek, with traditional statistics you can only reject the null hypothesis and can't really provide support for it, but Bayesian statistics does give you estimates of how much the null hypothesis is favored. In this case, the stats say the SNR really is the same for the dim samples. But what about even fainter parts? Could I have
biased the analysis by only looking at the dimmest 25% of samples? Maybe we won't see an effect because those dim parts are still too bright. Well, let's compare the dimmest 10% to the brightest 10%. The brightest parts got about 15 to 16 SNR; no difference there. And for the dimmest parts, there was also no difference: both types of samples were around 2.3 SNR. The 300-second subs' dim samples had 2.3 SNR, and the 600-second subs' dim samples also had 2.3 SNR.

Okay, but what if the dimmest 10% of samples was also too bright? Let's do this again, but look at the dimmest 5% compared to the brightest 5%. Note that I'm only doing this analysis multiple times because I'm trying to disprove myself: my initial hypothesis was that sub length doesn't matter as long as you've overcome the read noise, while Tim's hypothesis was that it does matter, and I want to give Tim's hypothesis as much of a chance of succeeding as possible. When comparing the dimmest 5% of samples, the SNR was 1.7 for the 300-second subs and 1.7 for the 600-second subs. We're running out of samples to compare, and any dimmer and we're getting into territory where the signal is just not discernible anymore from the light pollution in my area. It would take hundreds of hours to bring out any real detail in those areas; if I were under darker skies, it might only take around 10 hours. That's why light pollution sucks.

So let's recap. But before I do: these analyses took a lot of time and effort. If you'd like me to continue these types of videos, please consider supporting me on Buy Me a Coffee (Deep Sky Detail) or becoming a channel member; memberships start at only a couple of dollars a month. I'm hoping to create an identical dual-rig setup to make experiments like these a lot easier.
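The "hundreds of hours versus ten" contrast is what background-limited SNR math predicts. In this sketch, all of the rates are assumptions chosen for illustration, not measurements: stack SNR grows with the square root of total integration time, so the time needed to hit a target SNR scales with the sky background.

```python
def hours_needed(target_snr, signal_rate, sky_rate):
    """Integration time (hours) to reach a target SNR when background
    dominates: SNR = signal_rate*t / sqrt((signal_rate + sky_rate)*t),
    solved for t."""
    seconds = target_snr ** 2 * (signal_rate + sky_rate) / signal_rate ** 2
    return seconds / 3600.0

FAINT_SIGNAL = 0.01   # electrons/s/pixel from faint nebulosity (assumed)
CITY_SKY = 1.0        # electrons/s/pixel sky background, Bortle 6-7 (assumed)
DARK_SKY = 0.1        # the same sky with 90% less light pollution (assumed)

print(hours_needed(10, FAINT_SIGNAL, CITY_SKY))  # hundreds of hours
print(hours_needed(10, FAINT_SIGNAL, DARK_SKY))  # roughly a tenth of that
```

Cutting the sky background by 90% cuts the required integration time by nearly the same factor, which is why the faintest structure is effectively out of reach under these skies regardless of sub length.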
To recap: this experiment was done with broadband filters in a moderately light-polluted area, so the results should only be applied to broadband imaging in moderately or heavily light-polluted areas. If you have the right conditions, a good enough mount, good polar alignment, and good guiding, long subs can be fine: you won't have to worry as much about computer storage, and star shapes can be similar. But you have to worry more about planes, satellites, high-level clouds, and other things ruining each subframe. In terms of signal-to-noise ratio, there's no benefit to longer subframes as long as you're above the read noise; shorter subframes are fine, just not too short. What about blockiness? Well, this one is tricky. I would say it's possible that shorter subs could lead to blockier faint details, but I think you can mitigate this by stacking in 32 bits. Even if you're capturing 14-bit or 16-bit subframes, you can still stack a 32-bit image to retain that information and get more precision in your averages.

These analyses were done under light pollution. Why is that important? Well, light pollution destroys faint signal, so the really interesting faint stuff that is truly close to the read noise of the camera gets eliminated. Because of this, these results only apply to broadband imaging under light-polluted skies. So what about darker skies? I think there is an extremely credible argument for going as long as you can, like Tim Hutchinson claims. It's too bad I'm in light-polluted skies. It would be nice to have some sort of filter that could mimic dark skies, wouldn't it? Something that could get rid of, say, 90% of the light pollution... It seems like a narrowband analysis might be the next step for comparing long and short exposures. If you enjoyed this video, consider watching this one on why light pollution sucks, or this one that the algorithm recommends. Thanks for watching!

2025-01-07 08:36
