We recently launched our first-of-its-kind food safety rating system, and we look forward to rolling it out over the course of the next year. One unique component of our approach to providing residents with easy-to-use signage is our adoption of a rating “curve,” or adjustment of ratings by zip code. This new system has raised questions among some food safety advocates. So why are we doing it, and how does it work?
We sat down with our food program manager, Becky Elias, to get a better understanding of why the “curve” is critical to delivering an accurate and fair rating.
(Here’s Becky breaking down the “curve” on Q13 at about the 2:15 mark.)
Are there varying standards for food safety across King County?
Not at all. All restaurants in King County have always been required to meet one food safety standard, plain and simple. If a restaurant is open, it meets minimum standards. If a restaurant does not meet that standard, it is closed and only allowed to re-open once the problem is fixed. No exceptions.
Does your basis for closing a restaurant vary by neighborhood?
No. We close restaurants if they don’t meet basic food safety practices. A foodborne illness outbreak, 90 or more red critical violation points, and not having clean or hot water are a few examples of situations that result in closures.
If the restaurant’s permit has been suspended (“closed”) within the last year OR if a restaurant has required three or more return visits in the last year, it’s automatically placed in the “Needs to Improve” category on our window signs. Again, no exceptions.
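The automatic rule above is simple enough to state as code. Here is a minimal sketch of that logic; the function name and inputs are hypothetical, not from our actual system:

```python
def needs_to_improve(closures_last_year: int, return_visits_last_year: int) -> bool:
    """Automatic 'Needs to Improve' rule: a permit suspension ('closure')
    within the last year OR three or more return visits in the last year
    forces this category, regardless of any curve. No exceptions."""
    return closures_last_year >= 1 or return_visits_last_year >= 3
```

Note that this check happens before any zip code adjustment; the curve only applies to restaurants that are not in this category.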
What does adjusting restaurant ratings by zip code mean (“the curve”)?
This means that the rating system communicates, within each zip code and permit type, which restaurants practice the best food safety beyond meeting the minimum standard. Adjusting by zip code and using four inspections is the fairest and most accurate way to show how a restaurant is really performing. These two things reflect how well a restaurant performs on inspections over time, and also take into account the slight differences among our inspectors’ inspection styles.
We heard from consumers and other stakeholders that they wanted the rating system to convey two specific things. One, how well restaurants perform beyond passing an inspection. And two, how well they do over many inspections. They also shared concerns that the system might inaccurately convey a restaurant is either safer or riskier than it really is, and about what impacts inspector style could have. With this feedback, we set out to design a rating system to meet these needs.
How did you develop the zip code adjustment, and how do you calculate the ratings?
We looked to the evidence for the most accurate rating method. Specifically, we looked at the history of restaurant inspection performance in King County.
Looking back over many years, and then at 2015 in detail, we know that over multiple inspections, 50% of King County restaurants received ZERO red critical violation points. This is good news! These same data show that 40% of King County restaurants averaged more than zero but no more than 35 red critical violation points over their last four inspections, and 10% averaged more than 35 points over their last four inspections.
This provides our benchmark as we categorize food establishments into the “Okay,” “Good,” or “Excellent” categories. Each year, we will run this same analysis to determine the percentages for these thresholds across the whole county.
Next, we use these percentages to develop uniform cutoffs within each zip code. This means we look at the average scores over four inspections of all the restaurants (for the past year) to find the point cutoff for the top 50% of businesses, the next 40%, and the next 10%. It is these point cutoffs that we use going forward.
When we conduct routine inspections, the newest score is averaged with the scores from the previous three inspections. If the average is at or below the top cutoff (meaning fewer red critical violation points), the business gets an “Excellent” rating; if it is above that cutoff, it falls into a lower category. We use this same method to assign the “Good” and “Okay” categories as well. The process is repeated annually, both county-wide (benchmark setting) and within zip codes.
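The steps above can be sketched in a few lines of code. This is a simplified illustration, not our actual implementation; the function names, the data shapes, and the simple index-based percentile math are all assumptions made for clarity:

```python
from statistics import mean

def zip_cutoffs(avg_scores, pct_excellent=0.50, pct_good=0.40):
    """Given every restaurant's four-inspection average score in one
    zip code, return (excellent_cutoff, good_cutoff) in violation
    points. Lower scores are better, so the top 50% of businesses
    sit at or below the first cutoff, the next 40% at or below the
    second, and the remaining 10% above it."""
    ordered = sorted(avg_scores)  # best (fewest points) first
    n = len(ordered)
    excellent_cutoff = ordered[max(0, int(n * pct_excellent) - 1)]
    good_cutoff = ordered[max(0, int(n * (pct_excellent + pct_good)) - 1)]
    return excellent_cutoff, good_cutoff

def rate(last_four_scores, excellent_cutoff, good_cutoff):
    """Average the newest four inspection scores and compare against
    the zip code's cutoffs. Ratings are assigned by threshold, not
    by quota: every restaurant at or under a cutoff gets that
    rating, even if more than 50% tie at the same score."""
    avg = mean(last_four_scores)
    if avg <= excellent_cutoff:
        return "Excellent"
    if avg <= good_cutoff:
        return "Good"
    return "Okay"
```

Because `rate` compares against point thresholds rather than ranking restaurants into fixed-size buckets, it also illustrates the “no quota” point discussed below: if the “Excellent” cutoff in a zip code is zero points, every business averaging zero points earns that rating, however many there are.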
So you’re saying that the zip code adjustment doesn’t dictate the exact number of restaurants in each category?
Correct! The zip code adjustment sets the cutoff for each rating category in each zip code, but a restaurant will not be arbitrarily placed in a lower or higher category to meet a quota. If the cutoff for an “Excellent” rating is a zero-point average over four inspections, and 55% of businesses in that zip code have an average of zero points, then all 55% of those businesses will get an “Excellent” rating.
Conversely, if the cutoff for “Excellent” is an average of five points, and only 40% of businesses achieve that average over four inspections, then only 40% of businesses in that zip code will get the “Excellent” rating.
Could a “Needs to Improve” restaurant in one zip code be “Excellent” or “Good” in another zip code?
No. A restaurant that isn’t safe to eat at would be closed, no matter where it’s located. A “Needs to Improve” restaurant would also be categorized as such regardless of zip code. Remember, this category isn’t part of the curve.
What would happen if we didn’t rate “on a curve” by zip code?
If we didn’t rate on a curve, inspector variability would make the results less fair and accurate, which isn’t good for consumers or restaurants.
Even though our food inspectors are extensively trained, some variation exists in how inspectors cite violations during an inspection. We know that food inspectors observing identical conditions disagree 60% of the time. A zip code may appear to perform poorly, but when inspectors with different grading styles switch areas (about every three years), inspection scores in that zip code often improve, while the zip code the stricter inspector moves to tends to fare worse.
Restaurants in a zip code with a tougher-than-average inspector would look like they were performing worse on their inspections compared to the rest of the county. Rating on a curve adjusts for inspector variability, making the rating system fairer for all restaurants.
Could inspector variability be solved by more staff training?
Solved? No. Improved? Yes. Public Health’s food inspectors are food safety experts who go through extensive training to assess food safety risk across all types of food service, from a small coffee shop to a large banquet hall. They receive training from the FDA and the State Department of Health, plus ongoing internal training. That training covers food safety and the food code, plumbing, HVAC, teaching skills, motivational behavior change, and cross-cultural skills, and our inspectors have a deep understanding of cooking and the restaurant business.
In addition to training on these skills, in 2015 we started doing peer-reviewed inspections as part of staff training: inspectors perform an inspection together, talk through their observations, and learn from each other. These monthly peer-reviewed inspections have successfully reduced variability.
Even with all of this training and peer-reviewed inspections, subtle differences between inspectors will always exist. This slight variation in how inspectors cite violations during an inspection can have a major impact on a restaurant’s food safety rating. Since we know some degree of variation exists, and we have research to show it, we want to account for it in our rating system by rating on a curve by zip code.
Inspector variability is an issue for rating systems across the country, but many jurisdictions do not talk about it. We are being transparent by acknowledging and accounting for it while also continuing to provide extensive training to our staff.
What is the roll-out plan?
The new Food Safety Rating System is rolling out in four phases so that it can be evaluated throughout the year. Our process included lots of community and stakeholder feedback, and we always welcome more. Please feel free to provide feedback or ask questions by contacting firstname.lastname@example.org.
Originally posted on January 22, 2017
3 thoughts on “Food safety rating on a curve: How it’s done and why it matters”
Couldn’t inspector variability be addressed directly by using a percentile ‘curve’ per /inspector/, not per zip code? Addressing inspectors makes sense now that you say it, but I read through this and never understood: what other factors besides inspector are you aiming to capture with zip code?
If you’re trying to answer people’s questions, you know people are asking the question of justice: why is it fair for people in one zip code to find lower-performing restaurants behind an Excellent rating than people in another zip code? Can you address that directly?
Thanks for the questions!
Because inspectors are generally assigned to specific zip codes, the curve we’ve set accomplishes the same goal. In addition to inspector variability, we also want to make sure that restaurants in similar “risk groups” are compared against each other. It would be unfair to rate a full-service kitchen against a coffee shop that serves only pre-packaged goods, for instance.
To your last question, the curve is actually providing the justice you describe. If a restaurant is receiving higher (worse) scores, the curve will help us understand whether that score is a “true” high score or if it is the result of an especially tough inspector. Because we are setting the curve at the zip code level (essentially, an inspector’s working area) and basing it on four inspections over time, we can feel confident that any restaurant that receives an “Excellent” rating is comparable to any other restaurant in the county in the same category.
It does not make any sense. The “curve” is for inadequate inspectors. Please just do inspections as normal without doing it by zip codes.
Comments are closed.