In an interactive R session, while a data scientist is exploring the data, errors and warnings are harmless in the sense that the data scientist can react to them and take the appropriate corrective actions.

For R code in a production environment, which is executed without supervision, the story is different. The problems one has to deal with fall into the following categories:

* warnings
* bad input data
* foreseeable errors
* unexpected errors


Expecting errors and keeping the code clean

To avoid cluttering the code with error handling, the outermost function essentially consists only of a `tryCatch()` or `withCallingHandlers()` statement, which executes the actual code and calls custom condition handlers whenever a condition-class object is signalled during execution.

In such a case, control is transferred to the corresponding handler in the `tryCatch()`. Hence, apart from an occasional call that signals a condition, such as `stop()`, the code with the actual business logic is free from error handling. It is the custom handlers and the error classes that write a log, inform the operations team via email or ensure consistent output to the calling framework. Another advantage is that this logic can be exported, imported and developed independently of the use case.
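As a minimal sketch of this structure (the names `runMainLogic()`, `writeLog()` and `notifyOperations()` are illustrative placeholders, not part of our actual setup), the outermost function could look roughly like this:

```r
# Sketch of an outermost wrapper: the business logic contains no error
# handling; the handler logs, notifies operations and returns a
# consistent value to the calling framework.
runJob <- function(input) {
  tryCatch(
    runMainLogic(input),
    error = function(e) {
      writeLog(conditionMessage(e))
      notifyOperations(e)
      NULL
    }
  )
}
```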

The distinction between expected and unexpected errors mostly affects operations. For expected errors, the R code can take appropriate actions like re-establishing a lost connection, refreshing outdated cache data or informing operations about a persisting problem.

Unexpected errors, by definition, should never have happened and usually represent a coding bug or a misconfiguration. Operations should be informed about them as soon as possible to evaluate their seriousness and take adequate steps.

To abstract that distinction in the code, it can be helpful to create a custom condition-class S3 object, which can carry arbitrary output. That in turn can be helpful to extend the return structure of an error with additional information, such as an error code. If there is interest, an example for such an object can be given in another blog post, but for now a good starting point to learn about this is the 'Exception handling' chapter of Hadley Wickham's book 'Advanced R', freely available (here).
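To give at least a flavour (this is a purely illustrative sketch, not the implementation referred to above; the `errorCode` field merely stands for such additional information), such a condition, here using the `inputError` class that appears below, could be constructed and signalled roughly like this:

```r
# Sketch of a custom condition constructor that signals immediately,
# so it can be called in place of stop(); 'errorCode' is an illustrative
# extra field attached to the condition object.
inputError <- function(message, errorCode = NA_integer_) {
  cond <- structure(
    class = c("inputError", "error", "condition"),
    list(message = message, call = sys.call(-1), errorCode = errorCode)
  )
  stop(cond)
}
```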

Corresponding to those handlers, we currently use the custom conditions `inputError` and `expectedError`, which inherit from the S3 `error` condition class, to manually define our expected exception cases. A typical example of signalling an input error condition would look like this:

```r
if (inputDataIsBad)
{
  inputError("Some descriptive error message.")
}
```

This can be used just like `stop()` would be. Since we use custom conditions, we can have them exit to different handlers in the outer `tryCatch()`. As a consequence, we never actually call `stop()` ourselves anymore, but functions from imported packages may still do so. Such an external `stop()` call is then handled as an unexpected error, and appropriate action can be taken.
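Put together, the dispatch in the outer `tryCatch()` might look like the following sketch (the handler functions are again illustrative placeholders):

```r
# Sketch: the first matching handler wins, so the specific condition
# classes are listed before the generic 'error' class they inherit from.
result <- tryCatch(
  runMainLogic(input),
  inputError    = function(e) handleBadInput(e),        # bad input data
  expectedError = function(e) handleExpectedError(e),   # e.g. retry, refresh a cache
  error         = function(e) handleUnexpectedError(e)  # stop() from a package, bugs, ...
)
```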


Warnings

When invalid, inconsistent, incomplete or noncompliant input data are run through the R code, it can still produce a partial result, but might throw a warning along the way. However, the possibility that the produced result contains errors is usually not an acceptable risk in a production setting.

We have adopted a policy to write R code in such a way that warnings will not occur in the production environment. Hence, if a warning arises in production, it is always unexpected and should be treated as an error. This is achieved by setting `options(warn = 2)`, which converts all warnings into errors (*unexpected* errors, to be precise).
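For example, a coercion that would normally only warn then stops execution immediately:

```r
options(warn = 2)            # promote every warning to an error

as.integer("not a number")
# now an error: "(converted from warning) NAs introduced by coercion"
```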

Warnings often happen when a calculation yields `NA`. Frequently, a warning can be prevented by using the `na.rm = TRUE` argument, available for many aggregate functions such as `mean()` or `quantile()`. However, this sometimes produces intermediate results that lead to errors later on, so it can be worth implementing a check after such a call, as sketched below.
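A small sketch of this pattern, reusing the `inputError()` condition from above (what counts as an unusable result is of course case-specific):

```r
x <- c(12.3, NA, 15.1, NA)
avg <- mean(x, na.rm = TRUE)   # NA values are dropped, no warning is raised

# if x happens to contain no non-missing values at all, avg is NaN, which
# would only cause trouble further down the line, so check right here
if (!is.finite(avg)) {
  inputError("Cannot compute a mean: input contains no non-missing values.")
}
```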

Alternatives are using `is.na()`, `complete.cases()` or `na.omit()` to check for and deal with missing values. Depending on the use case, one can sometimes impute missing observations by merging supplemental 'fill-up' data or by using more complex statistical imputations like those supplied by the R packages `mice` or `Amelia`. A nice overview of the most common imputation packages is given (here).
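A brief sketch of these base R checks, together with a simple 'fill-up' merge (the `fallback` table is purely illustrative):

```r
df <- data.frame(id = 1:4, value = c(2.5, NA, 3.1, NA))

any(is.na(df$value))    # TRUE: missing values are present
complete.cases(df)      # TRUE FALSE TRUE FALSE, one flag per row
clean <- na.omit(df)    # keep only the complete rows

# simple 'fill-up' imputation from a supplemental table
fallback <- data.frame(id = 1:4, default = c(2.0, 2.0, 3.0, 3.0))
merged   <- merge(df, fallback, by = "id")
merged$value <- ifelse(is.na(merged$value), merged$default, merged$value)
```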

In cases where a warning is harmless, the code in question can be wrapped in `suppressWarnings()`.
Whether to ignore a warning, and how to deal with it otherwise, should be a conscious decision of the R developer. Examples of this are some rather warning-happy functions in `ggplot2` that still produce acceptable graphics output.
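For instance:

```r
# consciously accept that "n/a" becomes NA, without the
# "NAs introduced by coercion" warning ever reaching production
values <- suppressWarnings(as.numeric(c("1.5", "2.7", "n/a")))
```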


Example: Web-based API services

If a service is supplied, for example, as a web-based API endpoint, an internal warning or an input error should result in an HTTP 400 (Bad Request) status code, so that the caller can correct the input if possible and submit the request again.

All other errors are returned as HTTP 500 (Internal Server Error), since the user cannot do much about the problem other than calling the help desk, reporting a bug or sometimes just waiting for the error to go away by itself.

Additional distinctions in the error conditions can be made to inform the user whether retrying the call with the same input may help, or whether the problem seems permanent. A failing connection to an external database may resolve itself in a few hours, while another error might be more deterministic.
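As a rough, hedged sketch of that mapping, independent of any particular web framework and not our actual implementation (`runMainLogic()` is again a placeholder), the outermost handler of an endpoint could return something along these lines:

```r
# Illustrative mapping of the condition classes from above to HTTP status codes.
handleRequest <- function(input) {
  tryCatch(
    list(status = 200L, body = runMainLogic(input)),
    inputError = function(e) {
      # the caller can fix the input and try again
      list(status = 400L, body = conditionMessage(e))
    },
    error = function(e) {
      # unexpected (or expected but not user-fixable) problem
      list(status = 500L, body = "Internal server error, please contact support.")
    }
  )
}
```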

Obviously, providing a descriptive error message in addition to the HTTP code is imperative. Depending on interest, further examples of implementing such behaviour could be given in another post.