Patient-specific medicine is the tailoring of medical treatment to the characteristics of an individual patient. Decision support systems based on patient-specific simulation have the potential to revolutionise the way clinicians plan courses of treatment for conditions such as viral infections and lung cancer, and the way they plan surgical procedures, for example in the treatment of arterial abnormalities. Because patient-specific data can serve as the basis of simulation, candidate treatments can be assessed for their effectiveness in the patient in question before being administered, saving the potential expense of ineffective treatments and reducing, if not eliminating, lengthy laboratory procedures that typically involve animal testing.
In this article we explore the technical, clinical and policy requirements of three distinct patient-specific biomedical projects currently under way: the modelling of HIV/AIDS therapies, of cancer therapies, and of neuro-pathologies in the intracranial vasculature. These patient-specific medical simulations require access both to appropriate patient data and to the computational and network infrastructure on which to perform potentially very large-scale simulations. The computational resources required are supercomputers: machines with thousands of cores and large memory capacities, capable of running simulations within the time frames demanded in a clinical setting; the validity of results relies not only on the correctness of the simulation but also on its timeliness. Existing supercomputing site policies, which institute ‘fair share’ system usage, are not suitable for medical applications as they stand. To support patient-specific medical simulations, where life and death decisions may be made, computational resource providers must give urgent priority to such jobs and/or facilitate the advance reservation of such resources, akin to booking and prioritising pathology laboratory tests.
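The scheduling requirement described above can be illustrated with a minimal sketch: a two-level priority queue in which urgent clinical jobs always run ahead of ordinary ‘fair share’ work, regardless of submission order. The job names and the two priority levels are invented for the example and do not correspond to any real scheduler’s policy.

```python
import heapq

# Two priority classes: lower value = scheduled first.
URGENT, FAIR_SHARE = 0, 1

def submit(queue, name, priority, submit_time):
    """Add a job to the queue; ties within a class go to the earlier submission."""
    heapq.heappush(queue, (priority, submit_time, name))

def next_job(queue):
    """Pop the job the scheduler would run next."""
    return heapq.heappop(queue)[2]

queue = []
submit(queue, "protein-folding", FAIR_SHARE, 1)
submit(queue, "climate-model", FAIR_SHARE, 2)
submit(queue, "hiv-drug-ranking", URGENT, 3)  # clinical job arrives last

print(next_job(queue))  # prints "hiv-drug-ranking": the clinical job jumps the queue
```

In practice this behaviour would be obtained through a production scheduler’s pre-emption or advance-reservation facilities rather than hand-rolled code; the sketch only makes the policy itself concrete.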
Recent advances in advance reservation and cross-site run capabilities on supercomputers mean that, for the first time, large-scale biomedical computation can be envisaged in more than a purely research capacity. One area where this is especially true is the clinical decision-making process: the application of large-scale computation to offer real-time support for clinical decision-making is now becoming feasible. The ability to use biomedical data to optimise patient-specific treatment means that, in the future, the effectiveness of a range of potential treatments may be assessed before any is actually administered, sparing the patient unnecessary or ineffective treatments. This should provide a substantial benefit to medicine and hence to human quality of life.
Traditional medical practice requires a physician to use judgement and experience to decide on the course of treatment best suited to an individual patient’s condition. While training and experience hone a physician’s ability to select the most effective treatment for a particular ailment from the range available, this decision-making process often does not take into account all of the data potentially available. Indeed, in many cases the sheer volume or nature of the available data makes it impossible for a human to process, and much of it is therefore discarded. For example, in the treatment of HIV/AIDS, the complex variation inherent in data generated by genotypic analysis of the virus, used to predict phenotype (viral sensitivity to a number of drugs), makes the selection of treatment for a particular patient on the basis of these predictions a somewhat subjective matter.
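As a sketch of how such predictions could be used less subjectively, the example below ranks candidate drug regimens by a predicted sensitivity score. The regimen names and scores are hypothetical placeholders standing in for the output of a genotype-to-phenotype model, not results from any real assay or predictor.

```python
# Hypothetical predicted-sensitivity scores (higher = virus more sensitive
# to the regimen); these stand in for a genotype-to-phenotype model's output.
predicted_sensitivity = {
    "AZT+3TC+EFV": 0.72,
    "TDF+FTC+LPV": 0.91,
    "ABC+3TC+NVP": 0.58,
}

def rank_regimens(scores):
    """Order candidate regimens from most to least predicted sensitivity."""
    return sorted(scores, key=scores.get, reverse=True)

print(rank_regimens(predicted_sensitivity)[0])  # prints "TDF+FTC+LPV"
```

A simple ranking of this kind replaces an eyeball comparison of raw predictions with a reproducible ordering; the clinical judgement then concerns the scores’ trustworthiness rather than their arithmetic.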