The execution of notebooks and the execution of containers are two of the most demanding aspects of the project in terms of the infrastructure and computational resources they require. Both can be disabled, resulting in a data server that still offers data query, retrieval, and visualization capabilities. Because both allow a great deal of freedom to execute code on a shared resource, access to them must be considered carefully in light of security concerns and resource constraints.
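As a rough illustration of this separation, the sketch below shows how the two execution features might be switched off independently for a reduced deployment while query and visualization remain available. The flag names and structure are illustrative assumptions, not the project's actual configuration keys.

```python
# Hypothetical feature flags for a reduced deployment; the names are
# illustrative and do not correspond to the project's real settings.
SERVER_FEATURES = {
    "notebook_execution": False,   # disable spawning of notebook kernels
    "container_execution": False,  # disable submission of containerized jobs
    "data_query": True,            # query and retrieval remain available
    "visualization": True,         # web-based visualization remains available
}

def enabled(feature: str) -> bool:
    """Return True if the named feature is switched on for this deployment."""
    return SERVER_FEATURES.get(feature, False)
```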
There is a balance to strike between what is executed in the notebook's kernel, what is executed by the data server (including its microservices), and what is sent to the queuing system for batch execution. Some functionality may be duplicated across these layers, but we have tried to assign work based upon the nature of the task. Code executed in the Python kernel tends to consist of relatively short-lived tasks or calls to the data server. The data server can cache results, such as 3D coordinates generated with Open Babel or computed molecular orbitals, so that they can be retrieved in other notebooks, over the web interface, or through other methods without being recomputed each time. Longer-running work, such as geometry optimizations, executing quantum chemistry codes, and running machine learning tasks, is usually sent to the queuing system so that it can make use of HPC resources and container images.
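The sketch below illustrates this division of labor from the notebook kernel's point of view: a short, cacheable request goes directly to the data server, while a long-running calculation is submitted for batch execution and polled later. The endpoint paths, field names, and task identifiers are assumptions made for illustration and are not the project's actual API.

```python
import requests

BASE = "https://example.org/api/v1"  # hypothetical data server base URL

# Short-lived request from the notebook kernel: ask the data server for a
# molecule's 3D coordinates. The server generates them (e.g. with Open Babel)
# on the first request and can return the cached result on subsequent ones.
resp = requests.get(f"{BASE}/molecules", params={"inchi": "InChI=1S/H2O/h1H2"})
resp.raise_for_status()
molecule = resp.json()

# Long-running work is not executed in the kernel; instead, a job description
# is posted to the data server, which hands it to the queuing system to run on
# HPC resources using a container image.
job = {
    "moleculeId": molecule["id"],
    "task": "geometry_optimization",              # hypothetical task name
    "code": "nwchem",                             # hypothetical code identifier
    "image": "registry.example.org/nwchem:latest",
    "queue": "batch",
}
resp = requests.post(f"{BASE}/calculations", json=job)
resp.raise_for_status()
calculation_id = resp.json()["id"]

# Later, the kernel (or another notebook, or the web interface) can retrieve
# the cached result without re-running the calculation.
status = requests.get(f"{BASE}/calculations/{calculation_id}").json()["status"]
print(calculation_id, status)
```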