Monai Usage

Back to Monai

MONAI core

NGC container usage:

ml purge
ml ngc-monai/<version>
python <your_python_script>
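To run MONAI core non-interactively, the same commands can go in a Slurm batch script. The sketch below is a minimal example only; the #SBATCH resource requests (partition, GPU, memory, wall time) are placeholders to adjust for your allocation and workload.

  #!/bin/bash
  #SBATCH --job-name=monai_core      # placeholder job name
  #SBATCH --partition=gpu            # placeholder partition; adjust to your allocation
  #SBATCH --gres=gpu:1               # request one GPU (adjust type/count as needed)
  #SBATCH --mem=16gb                 # placeholder memory request
  #SBATCH --time=04:00:00            # placeholder wall time

  # Load the MONAI NGC container module and run your script
  ml purge
  ml ngc-monai/<version>
  python <your_python_script>

Submit the script with sbatch and adjust the directives to your workload.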

MONAI label

NGC container usage:

1. Server: start the server as a Slurm job

  • Load modules:
  ml purge
  ml ngc-monailabel/<version>
  • Copy this file to your own directory:
  /apps/nvidia/containers/monai/start_monai_server_readonly.sh
  • Copy these example directories to a location you own, e.g. (the exact directories may vary by use case):
  cp -r /apps/nvidia/containers/monai/apps/deepedit <my_place>
  cp -r /apps/nvidia/containers/monai/datasets/Task09_Spleen <my_place>
  • In start_monai_server_readonly.sh, modify the command line to read:
  singularity exec -B /apps/nvidia/containers/monai /apps/nvidia/containers/monai/monailabel/ monailabel start_server --app <my_place>/deepedit --studies <my_place>/Task09_Spleen/imagesTr
  or, for a newer version, e.g. (the apps/... and datasets/... directories may differ):
  singularity exec -B /apps/nvidia/containers/monai /apps/nvidia/containers/monai/monailabel.0.6.0/0.6.0 monailabel start_server --app <my_place>/deepedit --studies <my_place>/Task09_Spleen/imagesTr
  • Start the server as a batch job:
  sbatch start_monai_server_readonly.sh
  • Note the server address printed in the job output; it is used in the next step (a sketch of checking the output follows this list).
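The following is a minimal sketch of submitting the server job and finding its address; slurm-<jobid>.out is Slurm's default output file name, and the exact wording of the line containing the address may differ.

  # Submit the server job; sbatch prints the job ID
  sbatch start_monai_server_readonly.sh

  # Confirm the job is running and note the node it was assigned
  squeue -u $USER

  # Search the job output for the server address to use in the 3DSlicer client
  grep -i http slurm-<jobid>.out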

2. 3DSlicer client

  • Start an Open OnDemand (OOD) session
  • Start a Console session in hwgui with 1 GPU: gpu:geforce:1
  • In the console, load and start Slicer:
  ml qt/5.15.4 slicer/4.13.0
  vglrun -d :0.$CUDA_VISIBLE_DEVICES Slicer
  • In the Slicer GUI:
    • Select the module: Active Learning -> MONAILabel
    • Fill in the server address noted in step 1, e.g. http://c1007a-s17:8000/ (an optional connectivity check is sketched after this list)
    • Click the refresh button next to the server address
    • Load Next Sample
  • You are good to go! Enjoy!
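Optionally, before connecting from Slicer, you can check from a terminal that the server is reachable. This assumes the example address from step 1 and the standard MONAI Label /info/ REST endpoint, which returns app and model information as JSON.

  # Query the MONAI Label server (replace the address with the one from your job output)
  curl http://c1007a-s17:8000/info/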