This will probably be my last post on this topic (for this year, anyway), and I thought it would be worth looking at some of the difficulties in identifying the best application technique – difficulties which might explain the differing views on what works best for pre-emergence herbicides.
The first thing to say is that no application trial is easy – there are many confounding factors that can mess it up.
Firstly, if you are going to use full-scale, realistic equipment, you need big plots with big gaps between them – much bigger than in trials that use hand-held sprayers.
The problem with needing a large area is getting a uniform distribution of weeds – the bigger the area, the less likely this is, because most weeds are intrinsically patchy. And if you can’t see them before you start, you are flying blind: with no ‘before’ weed count, you can’t calculate control from a ‘before’ and ‘after’ comparison, so you need separate untreated ‘control’ plots to compare with – and they might have a completely different weed population from your test plots. If I were a betting person (which I’m not) I’d put money on being able to set up an application trial with half a dozen treatments and a good level of replication, do exactly the same thing for every treatment, and still find differences – purely due to the variability in the underlying weed population.
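That bet can be illustrated with a toy simulation (my own sketch, not data from any real trial): six "treatments" that are in fact identical, with plot weed counts drawn from the same patchy (negative binomial) distribution. The treatment means still spread apart, purely by chance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative values only: six identical "treatments", four replicate
# plots each. Counts come from the SAME negative binomial distribution
# (a common model for patchy counts), so any differences are pure noise.
n_treatments, n_reps = 6, 4
mean_count, dispersion = 50, 2  # small dispersion -> very patchy

# Negative binomial parameterised by mean and dispersion k: p = k / (k + mean)
p = dispersion / (dispersion + mean_count)
counts = rng.negative_binomial(dispersion, p, size=(n_treatments, n_reps))

treatment_means = counts.mean(axis=1)
print("plot counts per treatment:\n", counts)
print("treatment means:", treatment_means)
print(f"spread of means: {treatment_means.min():.0f} to {treatment_means.max():.0f}")
```

Run it a few times with different seeds and the "best" treatment changes each time – which is exactly the trap when there is no proper statistical analysis.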
We often deal with high variability by increasing replication (i.e. more plots), but that needs a bigger area, which increases the variability still further, possibly making the problem no better. I do not have the statistical skills to untangle this, but I do know that statistics are important, and without them there is a big danger of jumping to false conclusions.
Because application trials are difficult (and not cheap) there is a tendency to want to throw in as many treatments as possible. Again, this needs a bigger area, so variability increases; it also reduces the statistical power, and with it the chance of finding real effects. I always think the best trials have a very clear objective and as few treatments as possible.
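The multiple-comparisons side of this can be put in rough numbers. If every pairwise test uses a 5% significance level and the treatments are actually identical, the chance of at least one spurious 'significant' difference grows quickly with the number of treatments. (A simplified calculation assuming independent tests – real pairwise comparisons are not independent, but the direction of the effect holds.)

```python
from math import comb

alpha = 0.05  # per-test significance level (illustrative)
for n_treatments in (2, 4, 6, 10):
    m = comb(n_treatments, 2)  # number of pairwise comparisons
    p_any_false_positive = 1 - (1 - alpha) ** m
    print(f"{n_treatments:>2} treatments -> {m:>2} comparisons -> "
          f"{p_any_false_positive:.0%} chance of a spurious 'difference'")
```

With half a dozen treatments there are already 15 pairwise comparisons, and on this simplified calculation better-than-even odds of a false 'winner' unless the analysis corrects for it.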
Next, you need to be on the right part of the dose-response curve. If you are too near the bottom, nothing works and you won’t see differences. If you are too near the top, everything works and you won’t see differences. Somewhere in the middle is best, but if we don’t know where that is, we might have to try different doses, which means more treatments, more area, more variability….
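The shape of the problem can be sketched with the standard two-parameter log-logistic dose-response model (my choice of model and parameter values, purely for illustration – not fitted to any trial):

```python
import numpy as np

def log_logistic(dose, ec50=1.0, slope=2.0):
    """Fraction of weeds controlled at a given dose (2-parameter log-logistic).

    ec50 is the dose giving 50% control; slope sets the steepness.
    Illustrative parameter values only.
    """
    dose = np.asarray(dose, dtype=float)
    return 1.0 / (1.0 + (ec50 / dose) ** slope)

doses = np.array([0.1, 0.5, 1.0, 2.0, 10.0])
for d, c in zip(doses, log_logistic(doses)):
    print(f"dose {d:>4}: {c:6.1%} control")
# Near the bottom (dose 0.1) and near the top (dose 10) the curve is flat,
# so treatment differences are hard to detect; the middle is most sensitive.
```

At a tenth of the EC50 almost nothing works; at ten times it almost everything does. Only in the middle of the curve does a change in application technique translate into a measurable change in control.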
An alternative approach is to artificially create a uniform weed population by sowing weed seeds. Difficult when it’s blackgrass, but we have done it in the past with ryegrass. Or at SSAU we can grow trays of weeds and spray them with our track sprayer, which can operate at speeds of up to 14 km/h with real nozzle configurations. This effectively eliminates variability in the weed population and greatly reduces environmental variability.
I think the most important thing is to have a good hypothesis – it sounds a bit science-nerdy, but it is always the basis of the best trials. Let’s say you believe that a particular product works in a particular way, and that enhancing a specific aspect of the application will give better control. For example: bigger droplets deposit less on plants, therefore bigger droplets should give poorer control with foliage-applied products. You still might not see this in the field, because of the variability, but if you see the opposite, you don’t automatically think ‘oh, big droplets must be best’; you think ‘hmmm – back to the drawing board, how can I do a better experiment?’ If you haven’t got a hypothesis AND you throw in lots of treatments, whatever comes out best is assumed to be best, when actually it was probably just chance – particularly if your statistical analysis isn’t up to the job.
All of this is true for any application experiment, but pre-emergence applications bring a further issue: application may matter far less than the soil itself, or application may interact with soil parameters, so that each trial result applies only to the particular soil conditions and cannot be extrapolated any wider without a much better understanding of what is going on.
So I think the conflicting advice comes from a combination of an insufficient understanding of the processes involved, which leads to field trials that are not quite up to the job they are trying to do, plus everyone keeping their data to themselves for commercial advantage, so that we can’t even get the best out of pooling all the information. And of course, no public funding for this kind of work. I blame the government. Whoever they may be at the moment.