
NEC LX Software

Software development

While other vendors rely on third-party development for monitoring and control software, NEC additionally has its own software team that develops software for our clusters. The control this gives us over the software, and the knowledge gained from its development, help NEC provide better solutions to our customers.

NEC's software development group also develops the compiler, MPI and other tools for the NEC HPC SX Series.

LXC³ Cluster Command and Control

LXC³ is NEC's cluster command and control stack for the LX Series of high performance Linux clusters. It integrates more than a decade of our own cluster administration experience at HPC data centers of all sizes and the know-how gained from using and actively developing open source software, together with new ideas from our research and development activities.

Provisioning System

The LXC³ provisioning system is image-based and supports stateless and stateful deployment of cluster nodes, allowing for completely diskless compute nodes, hybrid nodes with parts of the system on disk and parts in RAM or on NFS, or full on-disk installations. It builds on a fork of the Perceus open source project, which NEC has decided to maintain, improve and extend.
The image-based provisioning system suggests, but does not enforce, an administration methodology for clusters. The main synchronization point for cluster nodes is the central Virtual Node File System (VNFS), which makes it possible to maintain a single system image for many cluster nodes. Node-specific settings such as IP addresses and hostnames are auto-configured during deployment of the system. This central administration paradigm is complemented by a procedural administration method: cluster nodes regularly check for and pull configuration modules that can adjust their setup. Changes to a VNFS image or to configuration modules can easily be tracked with the integrated versioning features.
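The pull-based configuration mechanism can be sketched roughly as follows. This is an illustrative shell script, not the actual LXC³ tooling, and all paths and module names are invented for the demo:

```shell
#!/bin/sh
# Illustrative sketch of pull-based configuration -- not the actual
# LXC3 tooling, and all paths are invented for the demo.  Nodes
# periodically synchronize configuration modules from a central store
# and execute each one to adjust their own setup.
CENTRAL=${CENTRAL:-/tmp/lxc3-demo/central}   # central module store
LOCAL=${LOCAL:-/tmp/lxc3-demo/node}          # node-local copy

# Seed the central store with one sample module for the demo.
mkdir -p "$CENTRAL" "$LOCAL"
cat > "$CENTRAL/10-motd.sh" <<'EOF'
#!/bin/sh
echo "configured by module 10-motd" > /tmp/lxc3-demo/node/motd
EOF

# Pull step: fetch the current set of modules.
cp -f "$CENTRAL"/*.sh "$LOCAL"/

# Apply step: run every pulled module.
for mod in "$LOCAL"/*.sh; do
    sh "$mod"
done
```

Because the modules are plain, idempotent scripts, re-running the pull-and-apply cycle leaves an already configured node unchanged.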
LXC³ provides scripts to create VNFS images for common enterprise-grade Linux operating systems; other Linux-based distributions can be integrated by using a regularly installed “golden client” node as the source for the VNFS image.
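The deploy-time auto-configuration of node-specific settings mentioned above can be sketched like this; it is an illustration rather than the actual LXC³ code, and the paths and values are invented (in a real deployment they would come from the provisioning database):

```shell
#!/bin/sh
# Illustrative sketch of deploy-time auto-configuration (not the
# actual LXC3 code): every node boots the same VNFS image, and only
# node-specific settings such as hostname and IP address are filled
# in during deployment.  Paths and values are invented for the demo.
NODE_NAME=${NODE_NAME:-node001}
NODE_IP=${NODE_IP:-192.168.1.1}
ROOT=${ROOT:-/tmp/lxc3-demo/root}    # root of the freshly deployed image

mkdir -p "$ROOT/etc"
echo "$NODE_NAME" > "$ROOT/etc/hostname"
printf '%s %s\n' "$NODE_IP" "$NODE_NAME" >> "$ROOT/etc/hosts"
```

Everything else in the image stays byte-identical across nodes, which is what makes a single VNFS image sufficient for a whole cluster.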

Figure: LXC³ Provisioning

Monitoring and Alert System

LXC³-managed clusters come configured with performance and health monitoring systems based on the well-known and widely used open source solutions Ganglia and Nagios, complemented by self-developed tools. These industry-standard monitoring systems ship fully configured with a variety of custom, in-house agents and metrics; they are integrated with each other and report various sensor and system data for analysis and alerting purposes.
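To give an idea of how such agents plug in, here is a Nagios-style health check. This particular check is our own sketch, not one of the agents LXC³ actually ships; it follows the standard Nagios plugin convention of printing one status line and signalling via exit code (0 = OK, 1 = WARNING, 2 = CRITICAL):

```shell
#!/bin/sh
# Illustrative Nagios-style health check (our own sketch, not an
# LXC3-shipped agent).  Nagios plugins print a single status line
# and report via exit code: 0 = OK, 1 = WARNING, 2 = CRITICAL.
check_rootfs() {
    warn=${1:-80}    # warning threshold, percent used
    crit=${2:-90}    # critical threshold, percent used
    # Percentage of the root filesystem in use, e.g. "42".
    used=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$used" -ge "$crit" ]; then
        echo "CRITICAL - root filesystem ${used}% full"; return 2
    elif [ "$used" -ge "$warn" ]; then
        echo "WARNING - root filesystem ${used}% full"; return 1
    fi
    echo "OK - root filesystem ${used}% full"; return 0
}

# Run the check; Nagios would evaluate the return code directly.
check_rootfs "$@"
status=$?
echo "plugin exit status: $status"
```

Any script that follows this convention can be scheduled by Nagios and surface its results in the alerting system alongside the stock checks.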

Resource Management

HPC clusters are shared among multiple users, and resources are coordinated with the help of a batch or queueing system that includes a resource scheduler. LXC³ clusters come pre-configured and ready to use with a resource management system based either on the free and open source programs Torque and Maui or on SLURM.

The resource management system is set up with one default queue. The user interface consists of a set of command line utilities that give full control over jobs and their resource definitions.
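For illustration, a minimal job script for the Torque/Maui variant might look as follows. The #PBS directive values (job name, node count, walltime) are invented for the example; on a SLURM-based cluster the equivalent script would use #SBATCH directives and be submitted with sbatch instead of qsub:

```shell
#!/bin/bash
# Minimal batch job script sketch for a Torque/Maui setup; the
# directive values are invented for the example.  The scheduler reads
# the #PBS lines, while the shell treats them as comments.
#PBS -N demo-job
#PBS -l nodes=2:ppn=8
#PBS -l walltime=00:10:00
#PBS -q batch

# Torque starts the job in $HOME; change to the submission directory.
cd "${PBS_O_WORKDIR:-.}"

# The actual workload -- here just a marker written to a log file.
echo "job running on $(hostname)" > /tmp/demo-job.out
```

The script would be submitted with `qsub jobscript.sh`, after which `qstat` shows its state in the default queue.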

Key features of the job scheduler:

    • Backfilling
    • Fair Share Scheduling
    • Topology awareness
    • Job reservation
    • Interactive jobs
    • Job dependencies
    • Job accounting
    • Definition of custom attributes

Other resource management systems (PBS-Pro, LSF, Grid Engine) can be integrated upon request.

Cluster Management

LXC³ offers centralized administration of cluster nodes comprising:

    • Fully automated deployment of stateful and stateless operating systems on cluster nodes
    • Parallel execution of administrative tasks on many or all cluster nodes
    • Distribution and collection of files to and from cluster nodes
    • Power management (on, off, reset) of cluster nodes
    • Graphical or text-based console access to cluster nodes
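The parallel-execution feature can be sketched as follows. This is an illustration rather than the actual LXC³ tool: the node list is invented, and for a self-contained demo each "node" runs the command locally instead of over ssh:

```shell
#!/bin/sh
# Illustrative sketch of running one administrative command on many
# nodes in parallel (not the actual LXC3 tool).  A real cluster would
# dispatch via ssh; for a self-contained demo each "node" runs the
# command locally and logs to its own file.
NODES="node001 node002 node003"          # invented node list
CMD='date +%Y'                           # the administrative task
LOGDIR=${LOGDIR:-/tmp/lxc3-demo/psh}

mkdir -p "$LOGDIR"
for node in $NODES; do
    # On a real cluster: ssh "$node" "$CMD" > "$LOGDIR/$node.log" &
    sh -c "$CMD" > "$LOGDIR/$node.log" 2>&1 &
done
wait    # collect all parallel tasks before reporting
echo "collected $(ls "$LOGDIR" | wc -l) node logs in $LOGDIR"
```

Fanning the tasks out in the background and gathering per-node logs is what keeps administrative commands fast even on clusters with hundreds of nodes.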

In addition, LXC³ features tools for cluster users that facilitate working with different environments for different compiler and library versions. These tools make it easy to switch between different versions of applications (e.g. ISV applications, MPI libraries) or to move the development tools from one version to another.
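The idea behind such environment tools can be illustrated with a toy sketch: selecting a software version by adjusting PATH. The directory layout and the `mygcc` wrapper are invented for the demo; LXC³'s actual tools manage such environments for real compilers, MPI libraries and ISV applications:

```shell
#!/bin/sh
# Toy sketch of what environment-switching tools do under the hood:
# selecting a software version by adjusting PATH.  The directory
# layout /tmp/lxc3-demo/apps and the "mygcc" wrapper are invented.
APPS=/tmp/lxc3-demo/apps

# Set up two fake versions of a "compiler" for the demo.
for v in 1.0 2.0; do
    mkdir -p "$APPS/gcc/$v/bin"
    printf '#!/bin/sh\necho gcc %s\n' "$v" > "$APPS/gcc/$v/bin/mygcc"
    chmod +x "$APPS/gcc/$v/bin/mygcc"
done

# "Loading" a version simply prepends its bin directory to PATH.
use_version() { PATH="$APPS/$1/$2/bin:$PATH"; export PATH; }

use_version gcc 2.0
mygcc            # prints "gcc 2.0"
```

Switching to another version is just another `use_version gcc 1.0` call; real tools additionally manage variables such as MANPATH and LD_LIBRARY_PATH and can unload versions again.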
