Response Time Analysis
Do EMS response times vary throughout the region?
Model Results
For more details on the specification of our model, see the Data & Methods tab.
Measures of Model Fit
We assessed the specification of our models using posterior predictive checks, a standard diagnostic in Bayesian estimation. A posterior predictive check compares the distribution of predicted response times from each MCMC draw, where each draw represents one plausible set of model coefficients, to the actual distribution of response times in the data. Below are the posterior predictive check results for the model including neighborhood effects. Light blue lines represent the distributions of response times predicted by the model, while dark blue lines represent the actual response times. The predicted distributions track the observed data closely, lending support to this specification.
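We do not reproduce our exact modeling code here; purely to illustrate how a check of this kind can be produced, here is a minimal sketch using PyMC and ArviZ on a toy log-normal response-time model (all data and parameter values below are made up, and our actual tooling may differ).

```python
import arviz as az
import numpy as np
import pymc as pm

# Toy example of a posterior predictive check (not our production model or data).
rng = np.random.default_rng(0)
observed_minutes = rng.lognormal(mean=2.3, sigma=0.4, size=500)  # stand-in for observed response times

with pm.Model():
    mu = pm.Normal("mu", 2.0, 1.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.LogNormal("response_time", mu=mu, sigma=sigma, observed=observed_minutes)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)
    # Draw replicated response times from the posterior predictive distribution
    idata = pm.sample_posterior_predictive(idata, extend_inferencedata=True)

# Light lines: replicated distributions from individual draws; dark line: observed data.
az.plot_ppc(idata, num_pp_samples=100)
```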
In our case, the posterior predictive checks did not immediately favor one model over the other. To compare our two models, we instead turned to information criteria, summaries of a model's ability to predict data not included in the sample it was trained on. Strong performance on this kind of check suggests that a model represents the data-generating process well.
We assessed our models using the leave-one-out (LOO) information criterion, which measures how well a model predicts each data point when that data point is held out of the data the model was trained on. In the following table, the best-fitting model is assigned a difference of 0, and large negative numbers indicate that a model performed much worse than the best model (a sketch of how such a comparison can be computed appears after the table). Thus, while both models produced a reasonable distribution of predicted response times, the multilevel model was superior at predicting individual response times from the specified predictors.
Model | Expected Log Predictive Density Difference | Standard Error |
---|---|---|
Neighborhood Level Model | 0.00 | 0.0000 |
Linear Model | -6909.09 | 149.1901 |
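As a rough sketch of how a comparison like this can be computed (again with PyMC and ArviZ on toy data, not our EMS data; the two models here are simplified stand-ins for our linear and neighborhood-level specifications):

```python
import arviz as az
import numpy as np
import pymc as pm

# Toy comparison (made-up data): a pooled "linear" analogue vs. a grouped analogue of
# the neighborhood-level model, ranked by PSIS-LOO.
rng = np.random.default_rng(1)
group = rng.integers(0, 5, size=400)          # stand-in for a neighborhood index
y = rng.normal(10 + group, 1.5)               # response times with a group-level shift

with pm.Model():
    mu = pm.Normal("mu", 10, 5)
    sigma = pm.HalfNormal("sigma", 5)
    pm.Normal("obs", mu, sigma, observed=y)
    idata_linear = pm.sample(1000, tune=1000, chains=2, random_seed=1,
                             idata_kwargs={"log_likelihood": True})

with pm.Model():
    mu_g = pm.Normal("mu_g", 10, 5, shape=5)  # one intercept per "neighborhood"
    sigma = pm.HalfNormal("sigma", 5)
    pm.Normal("obs", mu_g[group], sigma, observed=y)
    idata_grouped = pm.sample(1000, tune=1000, chains=2, random_seed=1,
                              idata_kwargs={"log_likelihood": True})

# The best model gets an elpd difference of 0; large negative differences mean worse
# expected out-of-sample predictive accuracy.
print(az.compare({"Neighborhood Level Model": idata_grouped,
                  "Linear Model": idata_linear}, ic="loo"))
```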
Spatial Autocorrelation
Because our data varies over space, we must attempt to account for spatial autocorrelation, or the general tendency for data points near each other to have similar values. Many models assume that no spatial autocorrelation exists, so its presence can produce misleading results. One way to assess this is to examine the model’s prediction accuracy across space. If we see that nearby areas have similar prediction errors, we have reason to be concerned about spatial autocorrelation. We therefore plotted the residuals from the neighborhood-level model across the EMS response region.
Visual inspection suggests some minor clustering in our model errors. An empirical measure of the magnitude of this effect is the local Moran's I statistic. Strongly positive values indicate that an area's residual resembles those of its neighbors (clustering), strongly negative values indicate that it differs sharply from them, and values near zero suggest little local autocorrelation.
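A sketch of how local Moran's I values can be computed from model residuals, using libpysal and esda on a toy grid of polygons (our actual analysis used the census tract geometries and may have used different tooling):

```python
import geopandas as gpd
import numpy as np
from esda.moran import Moran_Local
from libpysal.weights import Queen
from shapely.geometry import box

# Toy 10x10 grid of square "tracts" with made-up residuals (not our census tracts).
rng = np.random.default_rng(2)
cells = [box(i, j, i + 1, j + 1) for i in range(10) for j in range(10)]
tracts = gpd.GeoDataFrame({"residual": rng.normal(size=len(cells))}, geometry=cells)

w = Queen.from_dataframe(tracts)   # tracts sharing a border or corner are neighbors
w.transform = "r"                  # row-standardize the spatial weights

lisa = Moran_Local(tracts["residual"].to_numpy(), w)
tracts["local_I"] = lisa.Is        # local Moran's I for each tract
tracts["p_sim"] = lisa.p_sim       # permutation-based pseudo p-values

# Tracts with extreme local I and small p-values point to clustered residuals.
print(tracts.loc[tracts["p_sim"] < 0.05, ["local_I", "p_sim"]])
```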
Since few areas have extreme values of the local Moran's I statistic, we did not feel it necessary to fit a more complex model to address spatial autocorrelation. Our initial linear model suffered much more from spatial autocorrelation, which was our primary reason for abandoning it in favor of the neighborhood-level model. Evidently, including neighborhoods was sufficient to account for much of the spatial autocorrelation present in the data.
Model Coefficients
Having selected the neighborhood-level model as the appropriate model, we move on to interpreting the resulting coefficients. The following plot displays the effect of each variable included in the model relative to a reference patient (in this case, the reference is a black woman transported by ambulance with substance abuse symptoms before COVID-19). The reference response time for this case is 9.9 minutes. As an example of coefficient interpretation, if the same woman instead had cardiovascular symptoms, but all other variables remained the same, we would expect an EMS unit to arrive in about 8.4 minutes, since the coefficient associated with cardiovascular symptoms is approximately 0.84.
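The arithmetic behind this example, reading each coefficient as a multiplicative factor applied to the reference time (our reading of the model's scale; with the rounded coefficient the product lands just under the quoted 8.4 minutes):

$$
\hat{t}_{\text{cardiovascular}} = \hat{t}_{\text{reference}} \times \hat{\beta}_{\text{cardiovascular}} \approx 9.9 \text{ min} \times 0.84 \approx 8.3 \text{ min}
$$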
From this, we can gather that there is no significant difference in how long service takes across different races and genders. While this is not the only area of EMS services where equity issues can occur, it does not appear that there is systematic inequity in response times.
However, we do see large differences in response times across types of symptoms. Some of these make sense: a cardiovascular event is likely considered more time-sensitive than an alcohol- or drug-related incident. We also see some variation in response times for certain symptoms around the onset of the COVID-19 pandemic. Based on the plot above, we can be fairly confident that GI/GU cases take longer to serve during the COVID-19 era than before. Unfortunately, because there are relatively few incidents after March 15th, our estimates are not highly precise, and for most of these symptoms we cannot say for sure whether response times have increased or decreased. These results still point out possible trends worth monitoring as the pandemic continues.
Because this is a neighborhood-level model, the expected change in response time associated with being in a particular neighborhood is also interpretable (the interpretation of units is the same as explained above). When plotting response times across the region, we see that rural areas are expected to wait longer for emergency medical services than their urban counterparts. However, these results should be interpreted with caution because of our inability to directly control for distance traveled. As neighborhood or census tract regions get larger, this deficiency in our model becomes more consequential, so comparing the smaller regions within Charlottesville to those in Albemarle is suspect. Still, differences among similarly sized regions may be meaningful.
Conclusions
While this analysis is limited by the lack of data to control for travel distance, it still provides important information about emergency medical services in Charlottesville and Albemarle County. First, there do not appear to be any glaring disparities in response times across demographic groups.
Second, some types of symptoms are expected to be served much faster than others. For example, holding all other variables constant, we expect a cardiovascular incident to be responded to about 15% faster than a drug- or alcohol-related incident. With this information, EMS teams may be able to identify inconsistencies between response times and highly time-sensitive incident types, potentially increasing efficiency. Below we show our predicted range of response times for a cardiovascular incident and a drug- or alcohol-related incident.
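The figure itself comes from our fitted model; purely to illustrate the general recipe for turning posterior draws into predicted ranges for two incident types, here is a minimal sketch with made-up draws (the centers roughly echo the 9.9-minute reference and the 15% gap mentioned above, but none of these numbers are our estimates):

```python
import numpy as np

# Made-up posterior draws on the log-minutes scale (illustration only, not our posterior).
rng = np.random.default_rng(3)
n_draws = 4000
log_drug_alcohol = rng.normal(np.log(9.9), 0.05, n_draws)                # drug/alcohol incident
log_cardio = log_drug_alcohol + rng.normal(np.log(0.85), 0.03, n_draws)  # roughly 15% faster

for label, draws in [("Drug/alcohol", np.exp(log_drug_alcohol)),
                     ("Cardiovascular", np.exp(log_cardio))]:
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{label}: mean {draws.mean():.1f} min, 95% interval {lo:.1f} to {hi:.1f} min")
```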
Finally, COVID-19 has made calls take longer on average, although this increase is very small (on the order of 10 to 20 seconds) and likely not medically significant. This change may be driven by the extra time required for EMS responders to adhere to stricter personal protective equipment guidelines, a hypothesis that could be assessed if response times could be split into separate loadout and travel components.
Next Steps
The most obvious improvement to this analysis would be the inclusion of travel distance from dispatch to incident location. Even with our attempts to account for it, this variable likely affects many of our estimates to some degree, and it would not be surprising to see results shift if we included it in the model. Unfortunately, these data are not reliably collected, and while alternative approaches for estimating travel distance certainly exist, implementing them fell outside the time frame of this summer project (a simple proxy is sketched below).
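One simple stand-in, shown here only as a sketch with placeholder coordinates, would be the straight-line distance between dispatch and incident locations; routed road distance (for example from a routing service) would be more faithful but more involved.

```python
from math import asin, cos, radians, sin, sqrt

# Rough proxy for travel distance: great-circle ("as the crow flies") distance between a
# dispatch station and an incident location. Coordinates below are hypothetical placeholders.
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # Earth radius of roughly 6371 km

# Example with placeholder coordinates (not real station or incident locations)
print(round(haversine_km(38.03, -78.48, 38.06, -78.45), 2), "km")
```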
Including travel distance information would also allow us to make more meaningful interpretations of neighborhood effects. As it stands, we assume that the neighborhood effects are capturing much of the travel distance information. It would be interesting to see whether some neighborhoods are consistently under- or over-served. Of particular interest is the Pantops region, which has a large number of elderly residents at heightened risk from COVID-19; on the previous graphs, this region corresponds to census tract 105, just to the east of Charlottesville. Any changes in response times here would therefore be particularly consequential.
Finally, it is worth noting that response times are not the only way to assess medical service delivery, and expanding our analysis to other variables would help provide a more nuanced view of changes across the COVID-19 pandemic. For instance, drops in call volume (as discussed in the background section of the site) may indicate that certain communities have become highly reluctant to rely on emergency services, with potentially dangerous consequences. Other variables, such as the hospitals patients are transferred to and the types of medical procedures performed during calls, would also be relevant for further exploration.