MZmine 2 is an open-source framework for processing, visualizing, and analyzing mass spectrometry-based molecular profile data. It is based on the original MZmine toolbox described in a 2006 Bioinformatics publication.
- HPC_MZMINE_DIR - installation directory
MZmine can be run in batch mode, as described in the MZmine Manual, by calling startMZmine with a single argument: the path to a saved batch script generated within the GUI.
We provide an alternate startMZmine script that correctly sets the Java heap memory based on either the HPC_MZMINE_MEM environment variable or, if that variable is absent, on the total amount of memory requested for the job. Please see the sample script below. Note that it appears to be necessary to simulate a virtual X11 environment for MZmine to run in batch mode.
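The heap-sizing logic just described can be sketched roughly as follows. HPC_MZMINE_MEM, HPC_MZMINE_DIR, and SLURM's SLURM_MEM_PER_NODE (reported in MB) are real variables; the jar file name, the 512 MB headroom for JVM overhead, and the 4 GB fallback default are assumptions for illustration:

```shell
#!/bin/bash
# Hypothetical sketch of the alternate startMZmine wrapper's memory logic.
# Only HPC_MZMINE_MEM, HPC_MZMINE_DIR, and SLURM_MEM_PER_NODE are known
# variables; the jar name, headroom, and default heap are assumptions.

mzmine_heap() {
    if [[ -n "$HPC_MZMINE_MEM" ]]; then
        # explicit override wins
        echo "$HPC_MZMINE_MEM"
    elif [[ -n "$SLURM_MEM_PER_NODE" ]]; then
        # SLURM reports the job allocation in MB; leave headroom for JVM overhead
        echo "$(( SLURM_MEM_PER_NODE - 512 ))m"
    else
        echo "4096m"   # assumed fallback default
    fi
}

HEAP=$(mzmine_heap)
if [[ $# -ge 1 ]]; then
    # xvfb-run supplies the virtual X11 display MZmine needs in batch mode
    xvfb-run -a java -Xmx"$HEAP" -jar "$HPC_MZMINE_DIR/MZmine2.jar" "$1"
else
    echo "Usage: startMZmine <saved_batch_script>"
fi
```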
Note that HiPerGator2 nodes are diskless, so the '/tmp' directory that MZmine uses by default for its temporary files cannot be used. See Temporary Directories for details on how to set the $TMPDIR variable to point to a directory in your /ufrc space.
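For example, in your job script you might point $TMPDIR at your /ufrc space before launching MZmine. The group and user names below are placeholders, not real paths:

```shell
# Illustrative only: replace 'group' and 'jdoe' with your own group and username
export TMPDIR=/ufrc/group/jdoe/tmp
mkdir -p "$TMPDIR"
```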
[jdoe@gator3 mzmine]$ module load mzmine
Let's try the launcher script to start MZmine in a GUI session under SLURM and wrap it in Xpra, so we can connect from the outside. Here's the help message you get if you use the '-h' argument:
[jdoe@gator3 mzmine]$ launch_mzmine_gui -h
Usage: launch_xpra_gui_$application [options]
Options:
  -h              - show this help message
  -e <executable> - program to run (REQUIRED)
  -m <memory>     - memory, gb (default is 4gb)
  -p <procs>      - processor cores (default is a single core)
  -t <time>       - SLURM time limit, hrs (default is 4hrs)
  -a <account>    - SLURM account (default is your main account)
  -b              - Use burst SLURM qos (default is main qos)
  -n              - Do not wait until the job starts. I will run xpra_list_sessions later
  -j              - Set up environment for a Java program
  -f <jobfile>    - Job script to use for the gui session
  -l              - List application presets
  -v              - Verbose output to show the submission information
Defaults will be used for missing values
Alright, let's do a test run. Let's say you wanted to use 6gb of memory and run MZmine for 24 hours.
[jdoe@gator3 test_directory]$ launch_mzmine_gui -m 6 -t 24
Starting mzmine under Xpra in a SLURM gui session.
Requested mzmine memory size: '6gb'
Requested '4' processor cores
Requested '24' hours for the session
Waiting for the job to start as '-n' (nowait) argument was not specified.
The mzmine job '1490493' has started.
Listing all active xpra sessions.
Refreshing the session list for jdoe to remove stale sessions
List of active Xpra sessions for jdoe:

Session: i21a-s3.rc.ufl.edu:8325
    Job ID: 1490493, Name: mzmine
    Client command: xpra attach ssh:email@example.com:8325
    or (downloaded script): ./xpra attach ssh:firstname.lastname@example.org:8325

See https://wiki.rc.ufl.edu/doc/Xpra for general documentation on gui sessions
Since the job has already started, we see a live Xpra session in the output above. Otherwise, we would have to wait for the job to start and then run xpra_list_sessions (load the 'gui' module if needed to access the command).
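If you had used '-n', the manual check can be scripted with a small polling loop. This is only a sketch: xpra_list_sessions is the command mentioned above (from the 'gui' module), while the helper function name and the 30-second interval are assumptions:

```shell
# Sketch: wait until the named Xpra session appears, then print the session list.
# xpra_list_sessions comes from the 'gui' module; everything else is assumed.
wait_for_session() {
    local name="$1"
    # keep polling until the session list mentions our job name
    while ! xpra_list_sessions 2>/dev/null | grep -q "$name"; do
        sleep 30
    done
    xpra_list_sessions
}
# usage: wait_for_session mzmine
```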
Let's connect to the UF VPN (https://kb.helpdesk.ufl.edu/FAQs/VPNInstructions) and then use the xpra script from https://wiki.rc.ufl.edu/doc/Xpra#Microsoft_Windows in a MobaXterm terminal on a Windows client machine (for example). If you run
sh xpra attach ssh:email@example.com:8325
in MobaXterm, you should see a password window from TortoisePlink. After you enter the password, the MZmine GUI should show up on your local machine. Do not close the program with the [x] button in the top-right corner of the MZmine GUI window unless you want the job to end. Instead, click on the MobaXterm terminal where you started the command and use the 'Ctrl+c' key combination to detach from the session, so you can re-attach later.