Waters likes sound of new LC-MS software

News

  • Published: Jun 3, 2016
  • Author: Jon Evans
  • Source: Waters Corporation
  • Suppliers: Waters Corporation
  • Channels: HPLC / Laboratory Informatics / Chemometrics & Informatics / Base Peak

Waters Corporation has launched its Symphony Data Pipeline software, a client-server application for laboratories that acquire and archive large amounts of data. Symphony automates the movement and transformation of large volumes of LC-MS data, speeding up analytical workflows, reducing human error, and freeing scientists from the mundane but necessary tasks of managing data files.

“Just the simple step of being able to seamlessly and automatically copy raw files to a remote file location while a column is conditioning maximizes the time we can use the instrument for analysis,” said Paul Skipp, director of the Proteomics Research Centre at the University of Southampton in the UK. “Previously the mass spectrometer might stand idle for one to two hours while an operator copies data to a filestore in preparation for processing. With three Synapt mass spectrometers generating data 24/7 in our laboratory, this alone is a major advance.”

Symphony Data Pipeline can create efficiencies for laboratories in other ways too.

“We see great value in the modular nature of Symphony Data Pipeline; it allows us to rapidly develop and test new processes for handling experimental data, including real-time QC, prospective fault detection and tools for ensuring data integrity,” said Jake Pearce, informatics manager at the MRC-NIHR National Phenome Centre, Imperial College, London, UK. “Not just that, but it can save months of processing time and, in combination with noise reduction, petabytes of file storage.”

Symphony software allows scientists to ‘personalize’ the processing of UPLC-MS data acquired with Waters MassLynx mass spectrometry software. Once the data are acquired, Symphony can execute a range of actions, including moving the data file to a server, subtracting background noise, compressing, renaming and copying data, or running a series of executables on the data. Users can also design and implement their own data-processing tasks. This series of operations, known as a pipeline, is set up by the operator and runs in the background as soon as the data are acquired, whether on a single PC, a group of networked PCs, or PCs distributed across several laboratories.
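For illustration only, here is a minimal, stdlib-only Python sketch of the general watch-folder pattern the article describes: a background process picks up each newly acquired data set and runs it through an ordered list of steps (copy to a filestore, compress, rename). Symphony's actual configuration format and APIs are proprietary and not shown; all paths, step names, and the polling approach below are hypothetical.

```python
import shutil
import time
from pathlib import Path

WATCH_DIR = Path("acquired")    # hypothetical: folder where the acquisition PC writes data
FILESTORE = Path("filestore")   # hypothetical: mounted remote filestore

def copy_to_filestore(item: Path) -> Path:
    """Copy the acquired data set off the acquisition PC."""
    dest = FILESTORE / item.name
    if item.is_dir():                        # Waters .raw data sets are folders
        shutil.copytree(item, dest, dirs_exist_ok=True)
    else:
        shutil.copy2(item, dest)
    return dest

def compress(item: Path) -> Path:
    """Zip the archived copy to save filestore space, then drop the uncompressed copy."""
    archive = shutil.make_archive(str(item), "zip",
                                  root_dir=item.parent, base_dir=item.name)
    if item.is_dir():
        shutil.rmtree(item)
    else:
        item.unlink()
    return Path(archive)

def rename_with_timestamp(item: Path) -> Path:
    """Prefix the archive with a timestamp so runs sort chronologically."""
    stamped = item.with_name(time.strftime("%Y%m%d-%H%M%S_") + item.name)
    return item.rename(stamped)

# The "pipeline" is just an ordered list of steps; a user-defined step could
# equally subtract background noise or launch an external executable
# (e.g. via subprocess.run).
PIPELINE = [copy_to_filestore, compress, rename_with_timestamp]

def run(poll_seconds: float = 5.0) -> None:
    WATCH_DIR.mkdir(exist_ok=True)
    FILESTORE.mkdir(exist_ok=True)
    seen: set[Path] = set()
    while True:  # a real tool would also wait for the acquisition to finish writing
        for item in WATCH_DIR.iterdir():
            if item in seen:
                continue
            seen.add(item)
            result = item
            for step in PIPELINE:
                result = step(result)
            print(f"processed {item.name} -> {result.name}")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run()
```

Treating each step as a plain function keeps the pipeline modular in the spirit Pearce describes: new steps, such as real-time QC or fault detection, can be developed and tested in isolation and then slotted into the list.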

