Back to Monai
MONAI core
NGC container usage:
ml purge
ml ngc-monai/<version>
python <your_python_script>
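These commands can also be wrapped in a Slurm batch script for non-interactive runs. The sketch below is only an assumption-based example: the SBATCH directives (job name, partition, GPU request, walltime) are placeholders for your site's settings, and run_monai.py is a hypothetical stand-in for your own script.

#!/bin/bash
#SBATCH --job-name=monai-core         # placeholder job name
#SBATCH --partition=gpu               # assumed partition name; adjust for your cluster
#SBATCH --gres=gpu:1                  # request one GPU
#SBATCH --time=01:00:00               # placeholder walltime
#SBATCH --output=monai_core_%j.log

# Load the NGC MONAI container module, then run your script inside it
ml purge
ml ngc-monai/<version>
python run_monai.py                   # hypothetical name; substitute your own script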
MONAI label
NGC container usage:
- Server:
- To start the server as a Slurm job (a sketch of the complete submission script follows this list):
ml purge
ml ngc-monailabel/<version>
Copy this file to your directory:
/apps/nvidia/containers/monai/start_monai_server_readonly.sh
Copy these directories to a location you own, for example:
cp -r /apps/nvidia/containers/monai/apps/deepedit <my_place>
cp -r /apps/nvidia/containers/monai/datasets/Task09_Spleen <my_place>
Modify the singularity command line in start_monai_server_readonly.sh to read:
singularity exec -B /apps/nvidia/containers/monai /apps/nvidia/containers/monai/monailabel/ monailabel start_server --app <my_place>/deepedit --studies <my_place>/Task09_Spleen/imagesTr
sbatch start_monai_server_readonly.sh
- Note the server address from the job output; it is used in the next step.
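For orientation, here is a minimal sketch of what the submission script might look like after the edit above. The SBATCH directives are assumptions (placeholder partition, GPU request, and walltime); your copy of start_monai_server_readonly.sh from /apps/nvidia/containers/monai is the authoritative template, and only its singularity line needs to change.

#!/bin/bash
#SBATCH --job-name=monailabel-server   # placeholder job name
#SBATCH --partition=gpu                # assumed partition; keep whatever the provided script uses
#SBATCH --gres=gpu:1                   # assumed GPU request
#SBATCH --time=08:00:00                # placeholder walltime
#SBATCH --output=monailabel_%j.log     # the server address appears in this output file

ml purge
ml ngc-monailabel/<version>

# Singularity line from the step above, with <my_place> pointing to your copies
singularity exec -B /apps/nvidia/containers/monai /apps/nvidia/containers/monai/monailabel/ monailabel start_server --app <my_place>/deepedit --studies <my_place>/Task09_Spleen/imagesTr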
- 3D Slicer client
- Start an Open OnDemand (OOD) session
- Start a Console in hwgui with 1 GPU: gpu:geforce:1
- In the console, load and start Slicer:
ml qt/5.15.4 slicer/4.13.0
vglrun -d :0.$CUDA_VISIBLE_DEVICES Slicer
- In Slicer GUI:
- Select module: Active Learning -> MONAILabel
- Fill in server address, e.g.: http://c1007a-s17:8000/
- Click on refresh button next to server address
- Load Next Sample
- You are good to go! Enjoy!
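Optional check: if the refresh in Slicer does not connect, you can test from a console that the server is reachable. The /info/ endpoint is assumed from the MONAI Label REST API; replace the address with the one noted from your job output.

curl http://c1007a-s17:8000/info/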