
PharmaTec Series 2020 Post-Event Report

By Oliver Picken | 28 August 2021
Bringing you the latest news from Oxford Global's Pharma I.T. and Data, A.I. in Drug Development and SmartLabs & Laboratory Informatics congresses

On 24–25 September 2020, Oxford Global once again hosted our ever-popular PharmaTec series, featuring three outstanding congress programmes: Pharma I.T. & Data, A.I. in Drug Development, and SmartLabs & Laboratory Informatics. These events brought together over 500 leading pharma, academic and biotech delegates for more than 90 exciting presentations across topics including advancements in A.I., digital lab workflows and data management. Key speakers included Maria Del Pilar Schneider (Senior Data Mining Statistician at Ipsen), Kelly Zou (Head of Medical Analytics at Pfizer Upjohn), Lili Peng (Associate Director – External Innovation Data Sciences at Biogen), Richardus Vonk (Head of Oncology Statistics and Data Management at Bayer), Jackie Hunter (Board Director and Scientific Advisor at Benevolent AI), and Cindy Novak (Senior Manager, Lab Computing at Bristol-Myers Squibb).

Key Topics

Pharmaceutical I.T. and Data

The 18th Annual Pharma I.T. and Data Congress delved into recent advancements in FAIR data, data management and data visualisation. New to the 2020 programme was a dedicated section focusing on data analytics and handling in drug discovery. Highlights included a featured session on quantum computing by Evert Geurtsen (Oxford University), a presentation on machine learning for drug design by Haakan Joensson (KTH Royal Institute of Technology), and a panel discussion examining the latest developments in FAIR data and data harmonisation, moderated by Erik Schultes (GO FAIR).

Artificial Intelligence in Drug Development

The A.I. in Drug Development Congress returned for its fourth year, taking an in-depth look at utilising A.I. in drug discovery and molecular design, as well as in translational medicine and personalising patient treatments. The programme expanded to include retrosynthetic tools in drug discovery and a panel discussion on transforming R&D with A.I., featuring emerging biotech companies such as Exscientia, HealX and NuMedi. Some of the most popular sessions included Jackie Hunter's (BenevolentAI) presentation on identifying novel targets with A.I. and a roundtable discussion moderated by Michael Rebhan (Novartis) on decentralising A.I. in innovation ecosystems.

SmartLabs and Laboratory Informatics

Building on the success of last year's event, the 2nd Annual SmartLabs & Laboratory Informatics Congress featured a deep dive into advanced I.T. tools and technologies, both currently available and in development. Some of our most popular presentations covered the standards required to build smart laboratories and the myriad uses of automation and robotics in a laboratory setting. A particular focus this year was workflow automation; examples included a presentation on adopting an integrated continuum in labs by Aaron Blankenship (Bayer AG) and insight into connected labs and automation by Angelika Fuchs, which you can read a condensed version of here.

Market Trends

Machine Learning and Personalised Treatments

Advances in machine learning are helping us understand disease mechanisms in new ways. For many drugs currently on the market, the therapeutic benefits and the severity of side effects differ from patient to patient. This is particularly true in cancer treatment. One possible reason is that certain types of cancer are grouped under one label when they are in fact disparate diseases. Another is the simple fact that every patient and every cancer is different. Creating a treatment tailored to the individual is therefore sometimes the only way to ensure a therapeutic effect.

In cancer, mutations in a cell's DNA can substitute amino acids in the proteins the cell produces. The immune system can mark these altered proteins as invaders and begin to target them. The mutated peptides that appear on the surface of cancer cells are known as neoantigens. Machine learning opens an opportunity to identify the amino acid sequences of neoantigens, enabling the rapid development of personalised treatments. As a result, experts predict that machine learning will enable revolutionary breakthroughs in healthcare and empower new technology that can significantly reduce healthcare costs.
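
To make this concrete, here is a minimal sketch of the kind of model involved: a classifier that scores mutated peptides as neoantigen candidates. The peptides, labels and one-hot features below are illustrative assumptions for demonstration only; real immunoinformatics pipelines use far larger curated datasets and much richer models.

```python
# Toy neoantigen-candidate classifier: a sketch, not a production pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(peptide: str) -> np.ndarray:
    """One-hot encode a fixed-length peptide (9-mers here)."""
    vec = np.zeros((len(peptide), len(AMINO_ACIDS)))
    for pos, aa in enumerate(peptide):
        vec[pos, AA_INDEX[aa]] = 1.0
    return vec.ravel()

# Hypothetical training data: 9-mer peptides labelled 1 if presented
# on the cell surface (i.e. confirmed neoantigens), 0 otherwise.
peptides = ["SIINFEKLA", "GILGFVFTL", "AAAAAAAAA", "LLLLLLLLL"]
labels = [1, 1, 0, 0]

model = LogisticRegression()
model.fit(np.array([one_hot(p) for p in peptides]), labels)

# Score a new mutated peptide found by tumour sequencing.
candidate = "SIINFEKLV"
print(model.predict_proba([one_hot(candidate)])[0, 1])
```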

A.I. in Drug Discovery and Development

Artificial intelligence has been something of a buzzword in recent years thanks to its promise of faster, more cost-efficient drug development. With the cost of developing and winning approval for a new drug averaging 2.8 billion dollars, and timelines from discovery to market rarely under a decade, it is no surprise that technology with the potential to decrease both costs and time is of interest to the industry.

A.I. has a range of applications beyond initial discovery: we can use A.I. for drug target identification and validation, cell target classification or diagnosis, and improved drug design. Furthermore, A.I. can support preclinical and clinical trials by helping researchers predict more quickly and reliably how a drug might behave in specific contexts.

Companies can also run existing drugs through A.I. drug repurposing platforms to discover new applications. Finding new uses for currently available drugs is hugely beneficial to companies as these drugs have already passed through regulatory approval processes and can make it to market in much less time than novel compounds.

Cloud Computing

The pharmaceutical industry has long been considered slow to adopt new technology; cloud computing, however, has penetrated the industry rapidly. Approximately 83% of I.T. executives in healthcare and pharmaceutical organisations use cloud services. Cloud computing has several advantages over traditional data storage and management.

Protecting data and privacy is one of the most critical priorities for any pharmaceutical organisation. Cloud service models, including SaaS, IaaS and PaaS, offer answers to privacy, availability and security concerns, along with advantages such as integrated workflows, collaborative tools for teams, and usability across multiple hardware formats.

Another key advantage is the ability to create a centralised data pool, making it easy to track, share, monitor and preserve data. This provides an alternative to in-house server farms or costly local backup systems. Most platforms also have built-in authorisation systems that ensure data is only available to validated individuals. In short, cloud computing can help turn Big Data from a problem into an asset.
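
As a concrete illustration of the centralised data pool idea, the sketch below uses AWS S3 via boto3 as one possible backend; the bucket name and file paths are hypothetical placeholders, and access control would be configured on the bucket itself.

```python
# Sketch of a shared, access-controlled data pool on AWS S3.
import boto3

s3 = boto3.client("s3")  # credentials are resolved from the environment
BUCKET = "pharma-data-pool"  # hypothetical central bucket

# Each site uploads results into the shared pool...
s3.upload_file(
    "results/assay_2020-09-24.csv",             # local file
    BUCKET,
    "assays/site-berlin/assay_2020-09-24.csv",  # central key
)

# ...and any authorised colleague can retrieve them from anywhere.
s3.download_file(
    BUCKET,
    "assays/site-berlin/assay_2020-09-24.csv",
    "local_copy.csv",
)
```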

The Paperless Revolution

Physical paper has long been a staple of laboratories and data collection, but the digital revolution is now in full swing and such outdated methods are being left behind. Even Einstein's papers have now been digitised! The major barriers to going digital are cost, the training and impetus needed to change workflows, and the wide range of available options; for these reasons, many companies are reluctant to make the switch.

Electronic Laboratory Notebooks (ELNs) offer many benefits over traditional paper notebooks. One key feature is searchability: you can find the procedure or results of a test that took place months or years ago in seconds, rather than manually sorting through paperwork. Another selling point is multimedia capability; you can easily add videos, pictures, equations, graphics and charts. ELNs are also increasingly able to pull data automatically from connected devices, removing some of the more tedious data capture tasks, as the sketch below illustrates. For those who prefer to handwrite notes, many ELNs still allow this, and handwriting recognition tools that turn writing into text are improving rapidly.
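
The sketch below illustrates the device-to-ELN connection mentioned above: an instrument pushes a reading into a notebook entry as soon as it is captured. The endpoint, token and payload schema are hypothetical; real ELNs each expose their own vendor-specific API.

```python
# Sketch of automated instrument-to-ELN data capture over a REST API.
import json
import urllib.request

ELN_API = "https://eln.example.com/api/v1/entries"  # hypothetical endpoint
API_TOKEN = "replace-with-a-real-token"

def push_measurement(experiment_id: str, instrument: str, reading: dict) -> None:
    """Attach an instrument reading to an ELN experiment entry."""
    payload = json.dumps({
        "experiment": experiment_id,
        "instrument": instrument,
        "data": reading,
    }).encode("utf-8")
    request = urllib.request.Request(
        ELN_API,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        print("ELN responded with status", response.status)

# Example: a balance pushes its reading the moment the weight stabilises.
push_measurement("EXP-0042", "balance-01", {"weight": 104.7, "unit": "mg"})
```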

Data Management: Handling Extreme Volumes

Drug companies, scientists, and technologists have collected an unfathomable amount of data through their research and testing. This information is pivotal in the continued development and advancement of the medical and pharmaceutical industries.  

Unfortunately, data is typically collected from several sources and spread across multiple silos, making it extremely difficult to analyse, integrate and apply effectively. The volume of information is so enormous that new forms of technology are required to contextualise it and create searchable, usable datasets. As data collection grows in volume and complexity, the challenge is maintaining the ability to interpret it. Biological data is particularly challenging here: its complexity makes it incomprehensible to most A.I. programs, meaning that much pharmaceutical research data is still analysed manually.

Fortunately, a great deal of work has been done to meet this challenge. One approach is to iterate on and improve currently available data sets, making them easier for A.I. to analyse. An example of this is the 2009 HITECH Act, which attempted to standardise EMR systems and thereby create better biological data sets.

The second way to deal with complex data is to improve or create new artificial intelligence. For example, context normalisation, a relatively new technique, uses Natural Language Processing (NLP) and text analytics to understand unstructured data points. As artificial intelligence improves, these systems are gaining the ability to produce novel hypotheses without the need for costly human interaction and intervention.
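
To show what context normalisation might look like in miniature, the sketch below maps free-text lab notes onto a small controlled vocabulary. The synonym table and example note are illustrative assumptions; production systems use full NLP pipelines trained on curated biomedical ontologies.

```python
# Toy context normalisation: free text -> controlled vocabulary terms.
import re

# Tiny controlled vocabulary: normalised term -> known surface forms.
VOCAB = {
    "acetylsalicylic acid": ["aspirin", "acetylsalicylic acid"],
    "myocardial infarction": ["heart attack", "myocardial infarction"],
}

# Invert into a lookup of surface form -> normalised term.
LOOKUP = {form: term for term, forms in VOCAB.items() for form in forms}

def normalise(text: str) -> list:
    """Return the normalised terms found in a free-text note."""
    found = []
    lowered = text.lower()
    for form, term in LOOKUP.items():
        if re.search(rf"\b{re.escape(form)}\b", lowered) and term not in found:
            found.append(term)
    return found

note = "Patient on aspirin; history of heart attack in 2018."
print(normalise(note))  # ['acetylsalicylic acid', 'myocardial infarction']
```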

Laboratory Automation: The Difficulties of Changing Workflows

Laboratory automation has changed pharmaceutical testing; instead of a human being manually interacting with specimens, automation does the work. Automation has forever altered laboratories and comes with many advantages, including reduced cost of testing with fewer required personnel. Implementing automation, however, comes with several challenges.

One of the most significant hurdles is the requirement for workers to learn new hardware and software. Knowing how to monitor equipment and when to perform maintenance requires an understanding of the mechanical workings of the testing hardware. Laboratory automation also relies on complicated software to track test orders, direct specimens, interpret findings, and maintain patient and quality control records. For non-specialists, using automation hardware and software means learning what the automation system is doing, and why, at a systemic level. This is further complicated because the strategies learned in manual specimen handling rarely translate directly to the automated process.

Understanding the new workflows brought in by automation is often a long process, and it may create new demands for workers or staffing challenges due to changes in hours and responsibilities. For this reason, many companies choose to implement automation slowly despite the benefits it brings.

Data Integrity

Data integrity is essential in any industry, but the stakes are higher than average in the pharmaceutical industry. Any data error could mean severe consequences and even endanger people’s lives. The general definition of data integrity is ‘the maintenance and assurance of data consistency and accuracy throughout its life cycle’.

In recent years there has been a steep increase in the number of health authority enforcement actions, such as warning letters, import alerts, product detentions and suspensions or revocations of marketing authorisations, due to lacklustre data integrity practices in the manufacture and testing of pharmaceuticals. In early 2018, the European Medicines Agency (EMA) and the U.K. Medicines and Healthcare products Regulatory Agency (MHRA) issued guidance on data integrity that underlines the importance of ALCOA+, a framework of best practices under which data should be Attributable, Legible, Contemporaneous, Original and Accurate, plus Complete, Consistent, Enduring and Available. It is essential that pharmaceutical companies prove compliance with these principles.
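
To ground the ALCOA+ principles, here is a minimal sketch of an append-only audit record for a lab measurement. The field names and schema are illustrative assumptions; regulated systems define their own validated schemas.

```python
# Sketch of an ALCOA+-style, append-only audit log for lab measurements.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: records are never edited in place ("Original")
class AuditRecord:
    operator: str    # "Attributable": who generated the data
    instrument: str
    value: float
    unit: str        # "Legible": stored in clear, defined units
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )                # "Contemporaneous": timestamped as the work happens

def append_record(log: list, record: AuditRecord) -> None:
    """Append-only: corrections become new records, never overwrites."""
    log.append(record)

log = []
append_record(log, AuditRecord("o.picken", "hplc-02", 4.82, "mg/mL"))
print(log[0])
```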

The cost of meeting data integrity requirements is substantial. As a result, some companies have gambled, unwisely, that the savings from not investing in new technology would outweigh the risk of penalties. Unsurprisingly, this attitude has had consequences, damaging share prices and harming the reputation of several well-known companies. The more risk-averse may opt for cheaper systems, though these can prove a false economy when ongoing validation work drives up the overall cost.

Several more expensive ‘off-the-shelf’ solutions are available that attempt to solve this problem by providing a complete route to compliance and enabling businesses to quickly scale up without losing sight of their data. With that said, no system is ever infallible, and data integrity has as much to do with human error management and workplace culture as it does with technology. Progress in this area is incredibly important within the current climate of increased health authority enforcement actions.

Conclusion

Pharmacology is growing rapidly, and as it expands, so does the need for new and innovative technology and data management solutions. The importance of artificial intelligence, machine learning, automation and robotics will only grow as researchers tackle more complex problems and potential novel treatments. Likewise, the push for the digital evolution of data management and connected smart labs is creating new opportunities for innovation by giving researchers back time to focus on their research. If you are interested in pharmaceutical technology and optimising laboratories for the 21st century, you can view our upcoming PharmaTec series events here.

Speaker Biographies

Lili Peng (Associate Director of Data Science at Biogen)

Dr. Lili Peng is the Associate Director of Data Sciences in Biogen's External Innovation unit, where she applies advanced analytics and AI/ML capabilities to enable external innovation and expand Biogen's drug portfolio. Formerly a technology consultant at Booz Allen Hamilton, she has experience in data science, data management, and software implementation projects at the U.S. Food and Drug Administration. Prior to that, she worked as an informatics scientist at AstraZeneca, providing informatics solutions in research and development, medical affairs, and medical evidence and observational research. Lili completed her postdoctoral training in computational chemistry and bioinformatics at the University of California, San Francisco, and Stanford University, respectively. She earned her Ph.D. in bioengineering from the University of California, San Diego, with a focus on computational modelling of an anticancer polymeric drug delivery system, and holds a B.S. in chemical engineering from the Massachusetts Institute of Technology.

Richardus Vonk (Head of Oncology Statistics and Data Management at Bayer)

Dr. Richardus Vonk leads Oncology Statistics and Data Management at Bayer AG. Based in Berlin, Germany, he has over 30 years of experience in research and pharmaceutical development. Richardus regularly speaks about quantitative decision making in a changing pharmaceutical environment. His current scientific interest is in method development for early pharmaceutical development, biomarker development, and the transition between different phases of clinical development, all with a clear focus on quantitative decision making. Richardus has an MSc in Mathematics from the University of Nijmegen and obtained his PhD at the Free University Berlin.

Jackie Hunter (Board Director of BenevolentAI)

Jackie Hunter has over thirty years of experience in the bioscience research sector, working across academia and industry, including leading neurology and gastrointestinal drug discovery and early clinical development for GlaxoSmithKline. Before joining BenevolentAI, Jackie was Chief Executive of the Biotechnology and Biological Sciences Research Council (BBSRC). BenevolentAI is a British AI company using AI to augment the research capabilities of drug scientists, radically changing the way R&D is done.

Cindy Novak (LabOps Manager at Bristol Myers Squibb)

Cindy has been working in the biotech and pharmaceutical industries for over 20 years. During her career, she has participated in numerous LIMS projects in roles ranging from subject matter expert to technical specialist, as well as in various projects for Empower, OpenLab, ELNs and lab instruments. She has worked at multiple companies as they navigated the M&A process, both as the acquiring company and the acquired company, and recently worked on a project to deploy an integrated laboratory systems solution for a new manufacturing facility in Ireland. Cindy is currently working for Juno Therapeutics, a Bristol Myers Squibb company, as the LabOps Manager for the Laboratory Computerized Systems (LCS) team in Bothell, WA.

