
The Center for Research Computing (CRC) manages shared computational clusters for a wide range of research projects affiliated with Rice University and the Texas Medical Center.

Please visit the RCSG Home Page for more information.

Tutorials

Because our clusters are highly complex environments, we expect our users to acquire a basic set of skills before getting started. We understand that computing experience among researchers varies widely, and in any cross-disciplinary field there is a need for introductory material to bring researchers up to speed on fundamental concepts. We have gathered the following list of tutorials on a range of computing subjects that we think will be useful.

At the least, a cluster user should understand:

  • Basic file manipulation under Linux
  • Editing files under Linux
  • Writing and submitting job scripts
Tip: Start here: XSEDE HPC Tutorials. Many of the following links are borrowed from this list.

Linux

All of the clusters that make up our shared computing resources run the Linux operating system. Using Linux on the desktop is very similar to using Windows or macOS, but that changes when it comes time to perform tasks remotely, as we do with the RCSG clusters. To work on the RCSG clusters, users must use the command-line interface (CLI).
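As a minimal illustration of the basic file manipulation every cluster user should know, the following commands work in any Linux shell. The directory and file names here are examples only, not paths on any RCSG system:

```shell
# Create a working directory and move into it
mkdir -p project_demo
cd project_demo

# Create a file, then copy it and rename the copy
echo "hello cluster" > notes.txt
cp notes.txt notes_backup.txt
mv notes_backup.txt archive.txt

# List the directory contents and view a file
ls -l
cat archive.txt

# Clean up: remove the files and the directory
cd ..
rm -r project_demo
```

Files can be edited at the command line with editors such as `nano` or `vim`, which the tutorials linked below cover in more depth.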

Scripting Languages

In order to submit compute jobs to the clusters, a basic understanding of shell scripting is required. Job submission scripts typically use Bash, but can employ more advanced scripting techniques as well.
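To make this concrete, here is a sketch of what a Bash job submission script often looks like. The `#SBATCH` directives assume a SLURM-style scheduler, and the job name and resource values are placeholders; directive syntax and limits differ between clusters, so check your cluster's documentation before submitting:

```shell
#!/bin/bash
# Hypothetical job script sketch. The #SBATCH lines below assume a
# SLURM scheduler; on other schedulers the directive syntax differs.
#SBATCH --job-name=demo_job     # name shown in the queue (placeholder)
#SBATCH --ntasks=1              # number of tasks to run
#SBATCH --time=00:10:00         # wall-clock limit (hh:mm:ss)

# To the scheduler, the directives above are instructions; to Bash,
# they are comments. The body is ordinary shell scripting.
echo "Job started on $(hostname)"
echo "Working directory: $(pwd)"
# ./my_program would be launched here (hypothetical executable name)
echo "Job finished"
```

On SLURM systems a script like this would be submitted with something like `sbatch job.sh`; again, consult your cluster's documentation for the exact submission command on RCSG systems.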

Parallel Programming (MPI)

Parallel Programming (OpenMP)

Software Debugging


The CRC administers and maintains infrastructure in five core service areas: Shared Computing Clusters, the Research Data Facility, High-Speed Data Transfer, High-Performance Visualization, and Virtual Machines. While these services often fit together, each provides unique capabilities designed to accommodate specific research tasks.

This page gives an overview of these services, with links to more detailed information on each of them. If you have any questions or find that the services below do not meet your research computing needs, contact our CRC facilitators via ticket.

High-performance computing

High-powered computing resources for big datasets.


    • The CRC serves as an on-ramp for our researchers to scale their problems up to the minimum size requirements of large national supercomputing resources. One such national resource is the Extreme Science and Engineering Discovery Environment (XSEDE).

TeraGrid/XSEDE FAQ

  • Get an Account
  • Which System Should I Request?
  • How Much Does It Cost?
  • File a help request ticket
  • Cluster Documentation
  • Software Documentation

Research Data Facility (RDF)

When your project won’t fit on, or shouldn’t be on, an external hard disk.

High-speed data transfer

When your project is too big for a regular FTP transfer: specialized, NSF-funded infrastructure for moving large-scale data sets.

  • DTN/DTN2/DTN HA (Globus Data transfer nodes) Getting Started

  • Science DMZ (documentation coming soon)

Visualization

Help with visualization of data.

Virtual Machines

Help when your project needs virtual machines.

  • Commercial options are currently available through Amazon AWS and Google Cloud Computing. The CRC can help you get a quote and even get started. Google currently offers $300 in credits to new users.
  • Owl Research Infrastructure Open Nebula (ORION) VM Pool on Rice Campus. Getting started
  • Decommissioned Resources
    • Shared Pool of Integrated Computing Environments (SPICE) Decommissioned: March 27th, 2018

Help