Licensed Software available on the HPC Cluster

 

COMSOL

COMSOL is a simulation environment for building consistent multiphysics models. COMSOL Multiphysics is a general-purpose software platform, based on advanced numerical methods, for modeling and simulating physics-based problems. The package is cross-platform (Windows, macOS, Linux). The COMSOL Desktop helps you organize your simulation by presenting a clear overview of your model at any point.

NOTE: This license is for academic use only and is a floating network license, i.e., authorized users may run the software on their own desktops. Please contact hpc@nyu.edu for access to the license. COMSOL is also available on the NYU HPC cluster Mercer.


The modules available and the number of concurrent users authorized for each are listed below.

Module Name                              Quantity
COMSOL Multiphysics                      3
AC/DC Module                             1
CFD Module                               2
Chemical Reaction Engineering Module     1
Heat Transfer Module                     1
LiveLink for Excel                       2
LiveLink for MATLAB                      2
Material Library                         2
Microfluidics Module                     2
Molecular Flow Module                    2
Particle Tracing Module                  2
RF Module                                1
Wave Optics Module                       1
Structural Mechanics Module              1

License Availability on Prince:

Use the command "comsol_licenses" at the Linux command prompt on the HPC cluster (Prince) to see which licenses are currently unused.


Using COMSOL on HPC Cluster:

Several versions of COMSOL are available on the HPC cluster. To use COMSOL on the HPC cluster, please load the relevant module in your batch job submission script:

comsol module
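A sketch of the load line, assuming the module is simply named comsol (the exact versioned names can be listed with module avail):

```shell
module avail comsol   # list the installed COMSOL versions
module load comsol    # load the default version in your sbatch script
```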

Running a parallel COMSOL job on HPC cluster (Prince):

To submit a COMSOL job that runs on multiple processing elements, follow the steps below.

Step-1:

Create a directory on "scratch" as given below.

Working on scratch

Step-2:

Copy example files to your newly created directory.

Copy Example

Step-3:

Edit the sbatch script file (run-comsol.sbatch) as shown below:

Change "cd /share/apps/examples/comsol" to "cd /scratch/<net_id>/example/"

Step-4:

Once the sbatch script file is ready, submit it to the job scheduler with sbatch. After the job completes successfully, check the output log file for detailed output information.

sbatch
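The four steps above can be sketched end to end as shell commands; <net_id> is your NetID, and the paths follow the examples in the text:

```shell
mkdir -p /scratch/<net_id>/example          # Step 1: create a scratch directory
cd /scratch/<net_id>/example
cp -r /share/apps/examples/comsol/* .       # Step 2: copy the example files
# Step 3: edit run-comsol.sbatch so it cd's to /scratch/<net_id>/example/
sbatch run-comsol.sbatch                    # Step 4: submit to the scheduler
```

After the job finishes, inspect the Slurm output file (e.g. slurm-<jobid>.out) in the same directory.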


MATHEMATICA

Mathematica is a general computing environment that integrates algorithmic computation, visualization, and user-interface capabilities. Its many built-in mathematical algorithms make computation fast and straightforward.

Using Mathematica on HPC Cluster:

To run Mathematica on the HPC cluster, please load the relevant module in your batch job submission script:

mathematica module
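Assuming the module is named mathematica, the load line would look like the sketch below; check module avail for the versioned names actually installed:

```shell
module load mathematica   # assumed module name; see "module avail mathematica"
```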

Note: In the example below, the module is already loaded in the sbatch script.

Running a parallel Mathematica job on HPC cluster (Prince):

To submit a Mathematica job that runs on multiple processing elements, follow the steps below.

Step-1:

Create a directory on "scratch" as given below.

Working on scratch

Step-2:

Copy example files to your newly created directory.

Copy Example

Step-3:

Edit the sbatch script file (run-mathematica.sbatch) as shown below:

Change "cd /share/apps/examples/mathematica" to "cd /scratch/<net_id>/example/"

Step-4:

Once the sbatch script file is ready, submit it to the job scheduler with sbatch. After the job completes successfully, check the generated output log file.

sbatch


MATLAB

MATLAB is a technical computing environment for high-performance numeric computation and visualization. MATLAB integrates numerical analysis, matrix computation, signal processing, and graphics in an easy-to-use environment, without requiring traditional programming.

Note: Software is available to all faculty, staff, and students. MATLAB can be used for non-commercial, academic purposes.

Using MATLAB on HPC Cluster:

Several versions of MATLAB are available on the cluster; load the relevant version, for example:

module load matlab/2017a

Running a MATLAB job on a compute node:

Run the Slurm command below to get an interactive session on a compute node:

 

srun
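A typical Slurm command for an interactive session is sketched below; the resource values are illustrative, not site requirements:

```shell
# request an interactive shell on one compute node for one hour
srun --nodes=1 --ntasks=1 --time=01:00:00 --pty /bin/bash
```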

After getting access to a compute node, navigate to the examples folder with "cd /share/apps/examples/matlab/basic/". Look at the sample sbatch script used to run the MATLAB job and create your own script based on it. Submit the script with the command below.

sbatch
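A minimal sbatch script for a serial MATLAB job might look like the sketch below; the job name, resource requests, and the script name my_script.m are illustrative placeholders:

```shell
#!/bin/bash
#SBATCH --job-name=matlab_test   # hypothetical job name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

module purge
module load matlab/2017a

# run the MATLAB script in batch mode, without the GUI
matlab -nodisplay -nosplash -r "my_script; exit"
```

Save it as, say, run-matlab.sbatch and submit it with "sbatch run-matlab.sbatch".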

SAS

SAS is a software package that enables programmers to perform information retrieval and data management; report writing and graphics; statistical analysis and data mining; business planning, forecasting, and decision support; operations research and project management; quality improvement; applications development; data warehousing (extract, transform, load); and platform-independent and remote computing.

There are licenses for 2 CPUs on the HPC Cluster.

Using SAS on HPC Cluster:

To run SAS on the HPC cluster, please load the relevant module in your batch job submission script:

sas module
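A minimal sketch of the load line, assuming the module is named sas:

```shell
module load sas   # assumed module name; verify with "module avail sas"
```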

Note: In the example below, the module is already loaded in the sbatch script.

Running a parallel SAS job on HPC cluster (Prince):

To submit a SAS job that runs on multiple processing elements, follow the steps below.

Step-1:

Create a directory on "scratch" as given below.

Working on scratch

Step-2:

Copy example files to your newly created directory.

Copy Example

Step-3:

Edit the sbatch script file (run-sas.sbatch) as shown below:

Change "cd /share/apps/examples/sas" to "cd /scratch/<net_id>/example/"

Step-4:

Once the sbatch script file is ready, submit it to the job scheduler with sbatch. After the job completes successfully, check the generated output log file.

sbatch


STATA

Stata is a command and menu-driven software package for statistical analysis. It is available for Windows, Mac, and Linux operating systems. Most of its users work in research. Stata's capabilities include data management, statistical analysis, graphics, simulations, regression and custom programming. 

Using STATA on HPC Cluster:

To run STATA on the HPC cluster, please load the relevant module in your batch job submission script:

stata module
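Assuming the Stata module is named stata, the load line can be sketched as:

```shell
module load stata   # assumed module name; verify with "module avail stata"
```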

Note: In the example below, the module is already loaded in the sbatch script.

Running a parallel STATA job on HPC cluster (Prince):

To submit a STATA job that runs on multiple processing elements, follow the steps below.

Step-1:

Create a directory on "scratch" as given below.

Working on scratch

Step-2:

Copy example files to your newly created directory on scratch.

Copy Example

Step-3:

Edit the sbatch script file (run-stata.sbatch) as shown below:

Change "cd /share/apps/examples/stata" to "cd /scratch/<net_id>/example/"

Step-4:

Once the sbatch script file is ready, submit it to the job scheduler with sbatch. After the job completes successfully, check the generated output log file.

sbatch


GAUSSIAN

Gaussian is a suite of electronic structure programs based on the basic laws of quantum mechanics. The software can handle proteins and large molecules using semi-empirical, ab initio molecular orbital (MO), density functional, and molecular mechanics calculations.

Using Gaussian on HPC Cluster:

To run Gaussian on the HPC cluster, please load the relevant module in your batch job submission script:

gaussian module
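Assuming the module is named gaussian, the load line can be sketched as:

```shell
module load gaussian   # assumed module name; verify with "module avail gaussian"
```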

Note: In the example below, the module is already loaded in the sbatch script.

Running a parallel Gaussian job on HPC cluster (Prince):

To submit a Gaussian job that runs on multiple processing elements, follow the steps below.

Step-1:

Create a directory on "scratch" as given below.

Working on scratch

Step-2:

Copy example files to your newly created directory.

Copy Example

Step-3:

Edit the sbatch script file (run-gaussian.sbatch) as shown below:

Change "cd /share/apps/examples/gaussian" to "cd /scratch/<net_id>/example/"

Step-4:

Once the sbatch script file is ready, submit it to the job scheduler with sbatch. After the job completes successfully, check the generated output log file.

sbatch


TotalView (Rogue Wave)

TotalView, from Rogue Wave Software, provides a set of parallel debugging tools that give users control over program execution within a single thread or across groups of processes or threads. It comes with a GUI and a command-line interface. Job scripts on the Mercer cluster can include a line like the one below to set up the TotalView environment:

totalview module
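A sketch of such a line, assuming the module is named totalview:

```shell
module load totalview   # assumed module name; verify with "module avail totalview"
```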

For more information on the software, please visit its product web page and read its documentation.


Knitro

Knitro is a commercial software package for solving large scale mathematical optimization problems. Knitro is specialized for nonlinear optimization, but also solves linear programming problems, quadratic programming problems, systems of nonlinear equations, and problems with equilibrium constraints. The unknowns in these problems must be continuous variables in continuous functions; however, functions can be convex or nonconvex. Knitro computes a numerical solution to the problem—it does not find a symbolic mathematical solution. Knitro versions 9.0.1 and 10.1.1 are available.

knitro module
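Since versions 9.0.1 and 10.1.1 are available, the load line can be sketched as below; the module naming scheme is an assumption, so verify it first:

```shell
module load knitro/10.1.1   # or knitro/9.0.1; names assumed, check "module avail knitro"
```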

Running a parallel Knitro job on HPC cluster (Prince):

To submit a Knitro job that runs on multiple processing elements, follow the steps below.

Step-1:

Create a directory on "scratch" as given below.

Working on scratch

Step-2:

Copy example files to your newly created directory.

Copy Example

Step-3:

There is no sample sbatch script available for Knitro.

Step-4:

After creating your own sbatch script, you can submit it as follows:

sbatch
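Since no sample script is provided, a generic skeleton is sketched below; the module name, resource requests, and the executable my_knitro_job are assumptions to adapt to your own problem:

```shell
#!/bin/bash
#SBATCH --job-name=knitro        # hypothetical job name
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=01:00:00

module purge
module load knitro/10.1.1        # assumed module name; check "module avail knitro"

cd /scratch/<net_id>/example/
# invoke your Knitro driver here (compiled executable, AMPL model, etc.)
./my_knitro_job
```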

 
