Application software available on the system or installed by users is to be used for academic purposes only and may not be used for the monetary benefit of an individual or a company.
To protect the security of the system, users should neither share their passwords nor allow other individuals to use their accounts. The system administrators may verify compliance at any time; users found in violation will have their user IDs cancelled without further reference to them.
Individuals who attempt to use accounts, files, system resources or other facilities without authorization, or who aid others in doing so, may be committing a criminal act and may be subject to criminal prosecution.
It is the responsibility of each user to know what effects the use of certain programs and/or facilities can have on other users and/or facilities, including whether it may damage system resources or severely inconvenience other users currently on the system.
System files, as well as application software installed by users or provided under license, are not to be copied or tampered with.
A Project Report (1 page max.) is to be submitted at the end of the project.
Acknowledgement of the use of the center’s facilities should be made in journal publications, dissertations, theses, conference publications and reports published by the users.
Any outcomes of the project in terms of publications (journal articles, conference proceedings, etc.) should be communicated to the center.
Student accounts will be deleted and the associated user files removed upon graduation.
Do not exceed your storage quota. Exceeding your storage quota can lead to many problems, including batch jobs failing, confusing error messages and the inability to use X11 forwarding. Be sure to routinely run the "du -sh ~/" command to check your usage, and remove files if more space is needed.
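A minimal sketch of checking usage from the command line, assuming GNU coreutils (any dedicated quota command varies by cluster):

```bash
# Total size of your home directory (as recommended above).
du -sh ~/

# Largest top-level items in your home directory, to find cleanup candidates.
du -h --max-depth=1 ~/ | sort -hr | head -n 10
```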
Do not run jobs on the Master node. When you connect to a cluster via SSH you land on the Master (login) node, which is shared by all users. It is reserved for submitting jobs, compiling code, installing software and running short tests that use only a few CPU-cores and finish within a few minutes. Anything more intensive must be submitted to the Slurm job scheduler as either a batch or an interactive job. Failure to comply with this rule may result in your account being suspended. For a quick start, please see the Quick Start guide.
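Below is a minimal sketch of a Slurm batch script; the job name, resource values and executable (my_program) are illustrative placeholders, and the right values depend on your code and cluster:

```bash
#!/bin/bash
#SBATCH --job-name=test          # short name for the job
#SBATCH --nodes=1                # number of nodes
#SBATCH --ntasks=1               # total number of tasks
#SBATCH --cpus-per-task=1        # CPU-cores per task
#SBATCH --time=00:10:00          # run time limit (HH:MM:SS)

# The commands below run on a compute node, not on the Master node.
./my_program                     # hypothetical executable
```

Save this as, say, job.slurm and submit it with "sbatch job.slurm".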
Do not allocate more than one CPU-core for serial jobs. Serial codes cannot run in parallel, so using more than one CPU-core will not make the job run faster; it will only waste resources. See the Slurm page for tips on determining whether your code can run in parallel and for information about Job Arrays, which allow one to run many jobs simultaneously.
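As a sketch, a Job Array lets a serial code process many inputs at once while each task still uses exactly one CPU-core (my_serial_program and the input-file naming are hypothetical):

```bash
#!/bin/bash
#SBATCH --job-name=array-test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1        # serial code: exactly one CPU-core per task
#SBATCH --time=00:30:00
#SBATCH --array=0-9              # ten independent tasks, numbered 0 through 9

# Slurm sets SLURM_ARRAY_TASK_ID differently for each task, which can be
# used to select a different input file or parameter per task.
./my_serial_program input_${SLURM_ARRAY_TASK_ID}.dat
```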
Do not run a parallel code without first conducting a scaling analysis. If your code runs in parallel, you need to determine the optimal number of nodes and CPU-cores to use; the same is true if it can use multiple GPUs. To do this, perform a scaling analysis as described in Choosing the Number of Nodes, CPU-cores and GPUs.
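One simple way to sketch a scaling analysis is to submit the same workload with an increasing number of CPU-cores and compare run times (scaling_test.slurm is a hypothetical batch script):

```bash
# Submit the same workload with 1, 2, 4, 8 and 16 tasks.
for n in 1 2 4 8 16; do
    sbatch --ntasks=$n --job-name=scale_$n scaling_test.slurm
done

# After the jobs finish, compare elapsed times, e.g. with sacct:
#   sacct --name=scale_8 --format=JobID,NTasks,Elapsed
```

The optimal core count is roughly the point where doubling the cores stops giving a meaningful reduction in run time.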
Do not request a GPU for a code that can only use CPUs. Only codes that have been explicitly written to use GPUs can take advantage of them. Allocating a GPU for a CPU-only code will not speed up the execution time, but it will increase your queue time and waste resources. Furthermore, some codes are written to use only a single GPU. For more, see GPU Computing and Choosing the Number of Nodes, CPU-cores and GPUs.
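For reference, a GPU must be requested explicitly in Slurm; a minimal sketch follows (the --gres syntax may differ slightly between clusters, and my_gpu_program is a placeholder):

```bash
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1             # request one GPU; omit this line entirely
                                 # for a CPU-only code
#SBATCH --time=01:00:00

./my_gpu_program                 # hypothetical GPU-enabled executable
```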
Do not load environment modules using only a partial name. Always specify the full name of the environment module (e.g., module load compilers/intel/parallel_studio_xe_2020.4.912); on some clusters, failing to do so will result in an error. Also, avoid loading environment modules in your ~/.bashrc file; instead, do this in Slurm scripts and on the command line when needed.
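A quick sketch of finding and loading a module by its full name (the module name below is the example from the rule above; the modules actually available differ per cluster):

```bash
# List available modules matching a keyword to discover the full name.
module avail intel

# Load the module by its full name, in a Slurm script or on the command
# line, not in ~/.bashrc.
module load compilers/intel/parallel_studio_xe_2020.4.912
```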
Please do not use spaces when naming directories and files.
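For example (with illustrative names), prefer underscores or hyphens to spaces:

```bash
mkdir data_run_01           # fine
touch results-2024.txt      # fine
mkdir "my results"          # avoid: a space forces quoting in every
                            # later command and breaks many scripts
```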