Category Archives: Forecast Tools

Lessons in Snow Level Forecasting

It's worth taking a look at temperatures this morning as the overnight southwesterly flow has brought in mild air.

At about 1440 UTC (0740 MST) temperatures at Snowbasin were around 35ºF at the base, 35ºF at the Middle Bowl observing site, and 29ºF at the top of Mt. Ogden.  This puts the freezing level, the level at which the temperature is 32ºF/0ºC, at about 9000 feet. 

Source: MesoWest
In the central Wasatch, temperatures are above freezing at the base of all the ski areas.  It's currently 38ºF in town at Park City, 36ºF at the base of Alta, and 36ºF at the bases of Solitude and Brighton.  Here, the freezing level sits at right around 9000-9500 feet depending on the local topography. 
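As a back-of-the-envelope sketch, one can estimate the freezing level by assuming temperature varies linearly with height between two stations.  The elevations below are placeholders, not the actual station elevations, and real profiles are rarely this tidy (note that the base and Middle Bowl sites reported the same temperature this morning).

```python
# Freezing level from two station observations, assuming a linear
# temperature profile between them. Elevations are made up for
# illustration; they are not the real Snowbasin station elevations.

def freezing_level(z_low_ft, t_low_f, z_high_ft, t_high_f, freeze_f=32.0):
    """Elevation (feet) at which the temperature crosses freeze_f."""
    lapse = (t_high_f - t_low_f) / (z_high_ft - z_low_ft)  # degF per foot
    return z_low_ft + (freeze_f - t_low_f) / lapse

# Hypothetical mid-mountain (35 degF) and summit (29 degF) observations
level = freezing_level(7400, 35.0, 9350, 29.0)
print(round(level))  # 8375 with these assumed elevations
```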

The snow level is typically lower than the freezing level for a number of reasons.  As snow falls and begins to melt, it extracts heat from the atmosphere.  This often results in a layer of nearly constant temperature near 0ºC.  Evaporation of the falling snow can also cool the atmosphere or slow the melting.  Eventually, the snow turns into a mixture of snow and wet snow (or slush), then wet snow and rain, before becoming all rain.  The layer in which this occurs is called the transition zone.

Because of the effects of melting and evaporation, the snow level (and freezing level) can yo-yo depending on the precipitation rate, lowering when precipitation rates are high and cooling from evaporation and melting is greater, and rising when precipitation rates are low and that cooling is weaker.  This may be noticeable if you elect to don the garbage-bag look and are skiing today. 

To estimate snow levels, meteorologists use soundings extracted from numerical forecast models.  In addition to temperature, humidity (or dewpoint) is also used, along with estimates of precipitation intensity.  Typically, one uses the temperature and humidity profiles to estimate the height of what is known as the wet-bulb-zero level, the height at which the wet-bulb temperature is 0ºC.  The wet-bulb temperature is the temperature the air would have if cooled by evaporation to saturation, and it helps account for some of the cooling effects noted above.  In Utah, one often lowers this level by 1000 feet for an estimate of the snow level, although there are times when one might fudge it by less or more.  For example, at very high precipitation rates, one might lower it further.
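As a rough sketch of that procedure, the code below applies the Stull (2011) empirical wet-bulb approximation to a made-up model sounding, interpolates the height where the wet-bulb temperature crosses 0ºC, and applies the 1000-foot Utah rule of thumb.  The sounding values are invented for illustration; operational practice uses full model profiles, precipitation rates, and judgment.

```python
import math

def wet_bulb_c(t_c, rh_pct):
    """Stull (2011) empirical wet-bulb approximation (temperature in
    degC, relative humidity in percent); adequate for a sketch."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct) - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

def wet_bulb_zero_ft(profile):
    """profile: (height_ft, temp_c, rh_pct) tuples, ordered bottom to top.
    Returns the interpolated height where Tw crosses 0 degC, else None."""
    tw = [(z, wet_bulb_c(t, rh)) for z, t, rh in profile]
    for (z1, w1), (z2, w2) in zip(tw, tw[1:]):
        if w1 >= 0.0 > w2:
            return z1 + (z2 - z1) * w1 / (w1 - w2)
    return None

# Invented sounding: note the wet-bulb zero sits below the 0 degC level
sounding = [(5000, 6.0, 80), (7000, 3.0, 90), (9000, 0.0, 95), (11000, -4.0, 95)]
wbz = wet_bulb_zero_ft(sounding)
snow_level = wbz - 1000  # the Utah rule-of-thumb lowering described above
```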

We extract the height of the wet-bulb-zero level from the NAM and provide it with our latest forecast products.  Below is an example of the tabular output from last night's 0600 UTC initialized NAM forecast.  The wet-bulb-zero level was forecast to fluctuate between 8700 and 9000 feet through 8 AM this morning.  It remains between 8700 and 9300 feet through 5 PM this afternoon.  Thus, expect snow levels to be near or around 8000 feet today. 

Lowering of the wet-bulb zero begins slowly after about 10 PM tonight, with a more sudden drop very early Wednesday morning with the approach of the cold front.

Also available in that table are temperatures and winds for Mt. Baldy, estimates of snow-to-liquid water ratio/water content based on an algorithm developed by Trevor Alcott and myself, water equivalents produced by the NAM, and estimates of snowfall based on the water equivalent and the snow ratio estimates.  Handy, if you keep in mind it is just guidance from the NAM model.  Through 7 AM this morning, this NAM run produced 0.35" of water and 2.5" of snow at Alta-Collins, which compares fairly well to the 0.34" and 4" observed.  I can assure you that most forecasts aren't that good!
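The snowfall column is just the water equivalent multiplied by the snow-to-liquid ratio.  The sketch below shows that arithmetic; the ratio used here is merely implied by the numbers above, whereas the actual Alcott/Steenburgh algorithm predicts it from model variables.

```python
# Snowfall from water equivalent and snow-to-liquid ratio. The 7:1
# ratio is a placeholder implied by the numbers above; the real
# algorithm estimates the ratio from forecast conditions.

def snowfall_in(water_equiv_in, snow_to_liquid_ratio):
    return water_equiv_in * snow_to_liquid_ratio

# 0.35" of water at roughly a 7:1 ratio gives about 2.5" of snow
print(round(snowfall_in(0.35, 7), 2))  # 2.45
```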

Accessing ECMWF Forecasts

The European Centre for Medium-Range Weather Forecasts (ECMWF) produces the leading medium-range forecast model and ensemble prediction system in the world.  However, their products are generally paywalled and I am unable to access them freely for real-time applications.  (Note: They do generously provide their products for non-real-time research applications, which we greatly appreciate.) 

As a result, weather weenies and hobbyists who want to examine ECMWF forecasts have typically had to pay for a subscription.  However, ECMWF forecast graphics are now accessible without such a subscription.  Below, for example, is the 5-day forecast valid 0000 UTC Friday 22 December. 

I don't know how long this jailbreak will last, so enjoy while you can. 

The Folly of Betting on the GFS and "DModel/Dt"

Just a quick post today following up on some themes from posts the past few days.

The graphic below loops through GFS 228-, 204-, 180-, 156-, 132-, 108-, and 84-hour forecasts valid at 0000 UTC 4 December (5 PM MST Sunday).  Imagine trying to forecast for Sunday afternoon based on just this single forecast system.  Good luck.

The loop also illustrates the folly of forecasting based on model trends, or what forecasters call "DModel/Dt" (i.e., the rate of change of the model forecast with respect to time).  Clearly, there is no evidence of a consistent trend in those forecasts.  The pattern is too chaotic, leading to a lack of run-to-run consistency, even in the more recent forecasts.

It's fun to talk about DModel/Dt, but studies have found that it has little forecast value for medium-range forecasts.  Hamill (2003) examined the value of DModel/Dt and here's what they found:
"Extrapolation of forecast trends was shown to have little forecast value. Also, there was only a small amount of information on forecast accuracy from the amount of discrepancy between short-term lagged forecasts. The lack of validity of this rule of thumb suggest that others should also be carefully scrutinized before use."
Let's put this rule of thumb to bed.

The Deception of a Single GFS Forecast

Having cut my teeth in an era before ensembles, I confess some tendency to pull up the medium range forecasts produced by the Global Forecast System (GFS) when I arrive at the office during times of drought to scan for the next hope for flakes.

This is a colossal mistake, especially in the current pattern.  

The predictability at 4-7 days seems to have been remarkably low the past few weeks, leading to individual GFS forecasts that might be described as "all over the place," although one might argue that in general they have trended to drier as the forecast lead-time decreases.

Here's an example of the types of changes that one sees.  The 162-hour forecast from the GFS initialized at 0600 UTC 27 November shows a major storm for all of the mountains of Utah on Sunday afternoon.  Great hope, right?

Two days later, the 114 hour forecast from the run initialized at 0600 UTC last night says NO SNOW FOR YOU!

Let's consult an ensemble forecast.  We had a problem with our 0000 UTC NAEFS processing, so I'll use the plume forecast for Alta-Collins from yesterday.  Look at that spread.  There are some members generating over 3" of water and 30" of snow and others absolutely nothing.

I personally like plume diagrams, but the members producing the heaviest precipitation are the ones that attract the eye and cause the heart to race.  Close inspection of that diagram shows that a large fraction of the members (about 40%) are producing less than 0.5" of water.  Thus, if one skipped the GFS and looked instead at the ensembles, the possibility that we won't see much would be apparent.  The possibility of a larger storm is small, but not zero.
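The point about eyeballing plumes can be made quantitative: collapse each member to a total and count.  The member totals below are invented for illustration, not the actual NAEFS plume values.

```python
# Crude event probabilities from ensemble member totals (inches of
# water). These 20 values are made up; they are not the real plume.

water = [0.0, 0.05, 0.1, 0.1, 0.2, 0.3, 0.3, 0.4, 0.4, 0.45,
         0.5, 0.6, 0.7, 0.9, 1.1, 1.3, 1.6, 2.1, 2.6, 3.2]

def prob_below(members, threshold):
    """Fraction of members under a threshold."""
    return sum(v < threshold for v in members) / len(members)

print(prob_below(water, 0.5))  # 0.5  -- half the members stay under 0.5"
print(prob_below(water, 3.0))  # 0.95 -- a monster storm is unlikely, not impossible
```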

Humans like single model runs.  They are easy to look at and interpret, and they produce plausible forecasts.

However, for medium-range forecasts, they can be deceptive.  They do not provide estimates of the range of possibilities, and in a pattern like this, that's a problem.

I go to bed at night in a pattern like this expecting the worst (dust on dirt) and hoping for the best (major dumpage).

Giving Thanks for the NCAR Ensemble

For nearly three years, the National Center for Atmospheric Research (NCAR) has produced a daily, 10-member, cloud-permitting ensemble at 3-km grid spacing known as the "NCAR Ensemble". 

For those of us in the western U.S., the NCAR Ensemble forecasts, available from web sites hosted by NCAR and the University of Utah, attempted to do something that no operational forecast system could three years ago: capture the extreme spatial contrasts and quantify the inherent uncertainty of precipitation over the western United States. 

Last night's forecast, for example, shows the major deluge expected to affect the Pacific Northwest through Thanksgiving.  At 3-km grid spacing, the NCAR ensemble accounts for many regional and local topographic influences and, with 10-members, one can derive statistics related to the range of possible forecast outcomes and the likelihood of precipitation above certain thresholds (our standard 1" and 2" thresholds work well for the Wasatch, but not the Cascades!). 
Plume diagrams allow one to examine precipitation at various locations, including Mt. Baker Ski Area below.  Such a pity that nearly all of that water will fall in the form of rain.

These products are popular with readers of this blog, friends in the snow-safety community, and operational forecasters.

Recently, NCAR announced that the NCAR Ensemble will sunset at the end of the calendar year.  More information is below. 

Although I'm sad to see it go, I believe this move makes sense.  NCAR is a research lab, not an operational center.  They need to be unshackled from routine forecasting and free to explore creative ideas and pursue modeling breakthroughs.  The NCAR Ensemble did this for three years.  It has allowed us to learn a great deal about cloud-permitting ensembles.  For example, we have a paper examining the performance of the NCAR Ensemble that may be the subject of a future post.   

Given that tomorrow is Thanksgiving, it seems fitting to toast the NCAR Ensemble team that includes Kate Fossell, Glen Romine, Craig Schwartz, and Ryan Sobash.  Thanks so much! We look forward to a few more weeks of NCAR Ensemble forecasts, and hope that Mother Nature shifts this damn pattern so that we can actually use them for powder hunting in Utah!

Precipitation Overprediction Problems with the NAM Conus Nest

High-resolution forecast models are not necessarily better forecast models, and precipitation forecasts produced by the NAM CONUS Nest (hereafter the NAM-3km) are a prime example of this. 

The NAM-3km covers the continental US at a grid spacing of 3 km, four times the resolution of the 12-km NAM in which it is nested.  With such high resolution, you would think the NAM-3km would be especially useful for precipitation forecasting over the complex terrain of the western US, but it isn't, because it has a major overprediction problem.

Tom Gowan, a graduate student in my research group, recently led a study examining the performance of several forecast models at mountain locations across the western U.S. last winter.  I have been holding off on sharing these results broadly since the paper describing this work is still in review, but they are too pertinent to current forecast needs not to share at this juncture.  In the case of the NAM-3km, we used pre-operational test runs from last winter that were kindly provided by NCEP.

The plot below shows the ratio of mean-daily precipitation produced by the NAM-3km to that observed at SNOTEL stations.  Overprediction is evident at the majority of sites, with on average the NAM-3km producing 1.3 times as much precipitation as observed.
Source: Gowan et al., in review.
A major reason for this is that the NAM-3km produces far more major precipitation events than observed, especially over the interior western US.  In the plot below, the frequency bias is the ratio of the number of forecast events to the number of observed events in each event size bin.  The NAM-3km has by far the largest overprediction problem.  Note that the NCAR ensemble also produces too many large events, although the magnitude of the problem is not as acute.  

Source: Gowan et al. in review.
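The frequency bias in that plot is a simple ratio, sketched below with invented event counts (the real counts are in Gowan et al.).

```python
# Frequency bias: forecast event count over observed event count for a
# given event-size bin. A value above 1 means the model produces that
# event size too often. Counts here are invented for illustration.

def frequency_bias(n_forecast_events, n_observed_events):
    return n_forecast_events / n_observed_events

print(frequency_bias(39, 30))  # 1.3 -> 30% too many events in this bin
```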
I bring up these issues today because the NAM-3km is going batsh-t crazy for the storm later today and tonight.  For Alta-Collins, the 12-km NAM is producing 0.08" of precipitation through 10 AM tomorrow.  In contrast, the 3-km NAM is producing 2.04"! 

The loop below shows steady, drippy precipitation over the Wasatch and nearby ranges during the overnight period. 

Now, it is always dangerous to say a model is wrong before the forecast verifies, but I'm going to say it anyway.  This forecast is wrong.  There's little evidence to support such huge precipitation totals.  Even in the NCAR ensemble, 7 of the 10 members produce less than 0.2" of precipitation, and the wettest goes for about 0.57".  

This issue plagued the NAM Conus Nest when it was run at 4-km grid spacing and it appears to carry over to the higher resolution upgrade. 

The bottom line is this.  If you want great deep powder skiing, consider using the NAM-3km for your holodeck experience.  However, if you live in the real world, avoid using the NAM-3km precipitation forecasts unless you want to be severely disappointed on a regular basis.  

Under the Hood of Numerical Weather Prediction Models

Numerical Weather Prediction models, or computer models for short, form the backbone of modern weather forecasting.  They assimilate a diverse mix of observations collected all around the globe by satellites, weather stations, aircraft, radars, and other instruments, form a coherent analysis of the atmosphere, and integrate forward to produce a forecast. 

How the hell do they do it?  We'll focus here on two aspects of computer modeling: discretization and parameterization.


Today's computer models begin with a set of partial differential equations, known as the primitive equations, that describe fluid motion and are based on conservation of mass, conservation of momentum, conservation of energy, and the ideal gas law. The primitive equations can be used to describe the flow and behavior of fluids in a variety of contexts, including ocean circulations.

Unfortunately, computers can't solve these types of equations directly.  The primitive equations must be converted into algebraic equations to be solved on a computer.  There are a variety of steps and techniques for doing this, a process known as discretization. 

Discretization requires approximations and compromises.  One aspect of the discretization process that most weather hobbyists are aware of is resolution.  For grid-cell models, this is commonly expressed as a distance.  For example, the NAM has a horizontal grid spacing of about 12 km.  Often, these grid cells are conceptualized as rectangles, but other shapes can be used.  For example, the Model for Prediction Across Scales (MPAS), being developed by the National Center for Atmospheric Research, uses hexagons, with the occasional pentagon or heptagon, to cover the sphere. 
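To make discretization concrete, the sketch below turns the one-dimensional advection equation, du/dt + c du/dx = 0, into algebra with a first-order upwind difference on a periodic grid.  This is a toy; real models solve the full primitive equations in three dimensions with far more sophisticated numerics.

```python
# Upwind finite-difference discretization of 1-D advection on a
# periodic grid: u_new[i] = u[i] - (c*dt/dx) * (u[i] - u[i-1]).

def advect(u, c, dx, dt, nsteps):
    """March u forward nsteps; assumes c > 0 and c*dt/dx <= 1."""
    u = list(u)
    for _ in range(nsteps):
        u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(len(u))]
    return u

# A blob of "moisture" drifting downstream; with a Courant number of
# exactly 1, the scheme shifts the blob one grid cell per step.
u0 = [0, 0, 1, 2, 1, 0, 0, 0]
print(advect(u0, c=1.0, dx=1.0, dt=1.0, nsteps=1))
```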

Not all models are based on this grid cell concept.  For example, the GFS is a spectral model which, instead of using grid cells, represents the atmosphere as a combination of waves with differing wavelengths.  
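A minimal illustration of the spectral idea: store the flow as wave amplitudes and evaluate on a grid only when needed.  A real spectral model uses spherical harmonics and many thousands of waves; the two-wave field below is purely illustrative.

```python
import math

# wavenumber -> amplitude: the model's entire "state" in spectral space
coeffs = {1: 2.0, 3: 0.5}

def to_grid(coeffs, n):
    """Evaluate the sum of sine waves at n equally spaced grid points."""
    return [sum(a * math.sin(2 * math.pi * k * i / n)
                for k, a in coeffs.items()) for i in range(n)]

field = to_grid(coeffs, 16)
# One advantage: derivatives are exact in spectral space, since
# d/dx of sin(kx) is k*cos(kx), i.e., multiply by the wavenumber.
```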

The National Weather Service is currently developing a new modeling system that will replace the GFS, known as the FV3.  It is based on a third approach, known as finite volume (the "FV" in FV3).


The primitive equations describe fluid motion, but a computer model also needs what meteorologists call "physics": for example, radiation, clouds, precipitation, and interactions with the land surface.  This is really where the rubber hits the road, as in general we can't simulate these things directly.  For example, we can't simulate the evolution of every cloud drop in a cloud.  There are simply too many of them.  As a result, we need to take shortcuts and make simplifications, known as parameterizations. 

In the case of the cloud, since we can't simulate every cloud drop, we might instead simulate how much total cloud water is in each grid box, and then specify how that water is spread across small, medium, and large droplets.  Similar shortcuts are made in how we deal with other physical processes. 
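A cartoon of that bulk shortcut: carry one cloud-water number per grid box and split it across size classes with fixed fractions.  The fractions below are invented; real microphysics schemes use assumed drop-size distributions rather than fixed splits.

```python
# Bulk cloud-water partition. The fractions are placeholders for
# illustration, not values from any real microphysics scheme.

SIZE_FRACTIONS = {"small": 0.6, "medium": 0.3, "large": 0.1}

def partition_cloud_water(total_g_per_kg):
    """Split bulk cloud water (g/kg) across assumed size classes."""
    return {cls: frac * total_g_per_kg for cls, frac in SIZE_FRACTIONS.items()}

split = partition_cloud_water(0.5)  # 0.5 g/kg in the grid box
```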

In some cases, we have a good understanding of the physical processes and the parameterizations, while a shortcut, are quite accurate.  In others, understanding is weaker (in some cases much weaker), and we are using educated guesses and tuning to get something that looks reasonable. 

All Models Are Wrong, But Some Are Useful

That famous George Box quote is a good one to keep in mind when using numerical weather prediction models.  Really, modern-day numerical weather prediction is a remarkable scientific achievement, and the forecasts are getting better every day.  There's every reason to expect them to improve in the future as knowledge and techniques advance, provided we continue to invest in our global and regional observing systems.  A perfect model will give you a lousy forecast if it doesn't start with a good analysis of the atmosphere, as well as land, sea, ice, and lake conditions.  But that's a subject for another post.  

Perplexing Probabilities

There are a host of challenges in the forecast-communication business.  One that came to mind this morning as I surveyed the ensemble forecasts is the low-probability, high-impact weather event.

Fair weather looks to predominate over northern Utah through Thursday, but on Friday, an upper-level trough moves across the northwest U.S. with the trailing cold front racing across Utah.  The NAM calls for precipitation accompanying the front to be relatively light.  Perhaps some valley rain showers and mountain snow showers, but nothing for skiers to get excited about.  

If we look at our downscaled forecasts for Alta based on the Short Range Ensemble Forecast System (SREF) we find that most members are producing very light accumulations of 0.25" of water equivalent or less through 6 PM Friday (0000 UTC 21 October).  Again, nothing to get excited about.  However, 2 of the 26 members are going bigger and putting out about 0.7" of water or so.  
If we look at our downscaled NAEFS forecasts for Alta, most members are producing light accumulations, a few are in the 0.4" to 0.7" range, but then two outliers go absolutely huge, generating about 2.5 inches of water and around 25 inches of snow.  Skiing anyone?

Such outliers are unusual, but not unheard of.  However, I don't know of any studies that have attempted to look specifically at the reliability of such low-probability, high-impact forecasts.  The NAEFS forecast above, if taken literally, would yield about a 10% chance of 20" of snow or more on Friday, but a 90% chance of 7 inches or less.  Is that a reasonable forecast of the probabilities?  And if it is, how best to communicate it to the public and forecast customers?  "Well, we think that there will be some snow showers.  Odds are it won't add up to much, but there's a slight chance of 20 inches."  That should go over well.

I don't have answers for these questions.  We need better validation studies of our ensembles and, as ensembles improve, better ways to both extract and communicate probabilistic forecast information in a way that is useful to the end user.  

NCAR Ensemble Forecast Post Mortem

Weather, like politics, is local, and one often gets much different perspectives on the weather and the quality of a weather forecast depending on location.

I thought I'd go back and take a look at the NCAR ensemble forecasts for the 48 hour period ending 0000 UTC (5 PM MST) yesterday afternoon.  For simplicity, I just looked at the total precipitation.  The forecast for Powder Mountain, which is still closed due to avalanche hazard, was actually quite good, falling in the middle of the distribution.

In contrast, the forecast for Alta-Collins was quite poor.  The lowest ensemble member called for over 1.0" of water equivalent, whereas 0.67" was observed.

In general, we'd like to see forecasts fall within the ensemble spread, as occurred at Powder Mountain, rather than outside the ensemble spread, as occurred at Alta.  Most ensemble forecast systems are underdispersive, meaning that they do not produce a wide enough range of outcomes, leading to events falling above or below the ensemble spread.  We have been evaluating the NCAR ensemble and it is actually much better than the global operational ensembles in this regard.  For example, there are far fewer precipitation events that fall above or below the NCAR ensemble spread than the GEFS or even the coveted ECMWF ensemble.  

Courtesy Tom Gowan, University of Utah
The results above are for undownscaled model guidance, so part of the problem with the ECMWF ensemble and GEFS reflects the poor terrain representation of those models.  On the other hand, we've done some evaluation of downscaled output from the GEFS and for short ranges, it still doesn't do as well as the NCAR ensemble.
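One crude measure of dispersion is how often the observation falls outside the ensemble's min-max envelope, as happened at Alta.  The member values below are invented; as a rough benchmark, a statistically reliable N-member ensemble should see the observation land outside the envelope only about 2/(N+1) of the time.

```python
# Does the observation fall outside the ensemble envelope? Member
# totals are invented for illustration.

def outside_envelope(members, obs):
    return obs < min(members) or obs > max(members)

members = [1.0, 1.2, 1.4, 1.8, 2.3]      # member precip totals, inches
print(outside_envelope(members, 0.67))   # True  -- an Alta-style miss
print(outside_envelope(members, 1.5))    # False -- obs inside the spread
```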

The bottom line is that the NCAR ensemble remains an extremely valuable tool in the precipitation forecast toolbox.  The occasional miss is simply evidence that despite improving forecasts, we still have work to do.   


Although Utah had strong mountain winds yesterday, it largely escaped any serious damage.  Jackson, Wyoming was not so lucky.  Here's a screen grab from today's JHMR Mountain Report.

Source: Jackson Hole Mountain Resort

Final Upgrade to the NAM

The North American Mesoscale (NAM) Forecast System is scheduled for its final upgrade on February 1st.  Just in time for Groundhog Day, this upgrade to NAM "version 4" includes a number of major changes including:
  • Changes in the grid spacing of the CONUS NAM nest from 4 to 3 km, Alaska nest from 6 to 3 km, and CONUS fire-weather nest from 1.333 to 1.5 km.  Yes, the latter is a decrease in resolution, but probably not significant.
  • More frequent calls of some model physics packages to every 2nd time step and more frequent radiation calls for the NAM 12-km domain
  • Specific humidity advection now done every time step (shockingly, this wasn't already being done)
  • Changes to the model convective parameterization in the 12-km domain
  • Updated cloud microphysics
  • Land-surface model improvements
  • Completely updated data assimilation system
  • Use of a new climatology of fresh water lake temperature for inland water bodies not resolved by the current 1/12th degree analysis
  • Reduced terrain smoothing in NAM nests
Gory details available from Technical Implementation Notice 16-41, available here, and the poster below.  

I think it is likely that these changes will result in some improvements in the skill of the 12-km NAM domain.  That domain, however, has been a pretty reliable performer for event water-equivalent forecasts over the central Wasatch (despite its low resolution), so I'm hoping that the biases don't change much.  My conversations with the NAM developers this past week suggest they think that the changes will also help some of the overforecast problems that plague the NAM nest, but they really haven't done a careful analysis over the west.  Instead, their inferences are based on examining non-orographic precipitation events east of the Rockies.  Time will tell.

It is anticipated that this will be the final upgrade to the NAM and that eventually a new cloud-allowing modeling system will be developed using the Geophysical Fluid Dynamics Laboratory (GFDL) Finite-Volume Cubed-Sphere Dynamical Core (FV3), which was recently chosen for NOAA's next-generation global environmental modeling system and replacement for the GFS.