Success Stories

Problems Solved

As a prominent team member on a multibillion-dollar program supporting diverse test, development and operational environments at numerous geographically separated locations across the United States, RPI-CS, Inc. is well suited to serving large enterprises. We have been involved in a wide range of Software Modernization projects, including migrations from Oracle/Sun Microsystems Solaris 8 to 10, Microsoft Windows Server 2003 to 2008 and VMware ESXi 4.x to 5.x. Additional projects include numerous large datacenter physical-to-virtual server migrations undertaken to reduce administration overhead, increase server density and lower overall operating costs.

We were tasked with supporting a major hardware refresh for our customer, driven by several requirements. The primary requirement was End-of-Support for nearly the entire hardware baseline, which had been in place for six years. A second requirement was to consolidate the existing infrastructure into a smaller footprint while continuing to meet functionality, performance and security requirements. The new system was also required to leverage as much of the existing software as possible, both to limit the cost of developing the new software baseline and to make use of existing licenses.

To meet these requirements, the new design leveraged virtualization of the Solaris 10 operating system on Oracle hardware. To meet networking and storage requirements, the decision was made to use Fibre Channel over Ethernet (FCoE) to reduce the number of Local Area Network (LAN) and Storage Area Network (SAN) switch ports, saving the customer millions of dollars in infrastructure costs.
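To illustrate where those savings come from, the brief Python sketch below compares switch-port counts for a conventional build with separate LAN and SAN fabrics against a converged FCoE build. The server and per-server port counts are hypothetical placeholders for illustration, not figures from the actual program.

    # Rough switch-port comparison: separate LAN + SAN fabrics vs. converged FCoE.
    # All counts below are hypothetical placeholders, not figures from the program.
    servers = 200
    lan_ports_per_server = 2   # redundant Ethernet ports per server
    san_ports_per_server = 2   # redundant Fibre Channel ports per server
    cna_ports_per_server = 2   # redundant converged network adapter (FCoE) ports

    traditional_ports = servers * (lan_ports_per_server + san_ports_per_server)
    converged_ports = servers * cna_ports_per_server

    print(f"Separate LAN/SAN fabrics: {traditional_ports} switch ports")
    print(f"Converged FCoE fabric:    {converged_ports} switch ports")
    print(f"Switch ports eliminated:  {traditional_ports - converged_ports}")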

Upon completion of the initial design, a subset of the proposed equipment was purchased and assembled in the RPI test lab. The customer’s software baseline was installed on the test system and initial functionality and performance testing was executed. RPI resolved the issues that were identified by working closely with the hardware and software vendors.

After testing of the prototype system produced results that met the customer’s criteria, another set of equipment was purchased to scale out the prototype so that it more closely resembled the proposed design. Additional testing was performed on the scaled-out system to ensure there were no interoperability issues among the various server types running in the fully virtualized environment.

Successful testing of the prototypes led the customer to purchase and deploy the equipment at multiple operational sites. The final solution was deployed by standing up the new architecture alongside the legacy systems, which allowed simultaneous testing of both systems, provided redundancy during system cutover and minimized downtime to mission-critical systems; after verification and acceptance testing was completed, the legacy systems were removed.

RPI-CS, Inc. has been involved in a critical National Systems Program since 2003. Our contribution to this multibillion-dollar project involves a High Performance Computing (HPC) environment with I/O and availability requirements at the cutting edge. The program relies heavily on RPI for its storage and computing engineers, and we continue to act as a key contributor of integration, system architecture, development and product evaluation services.

RPI provided expertise in High Availability (HA) design, large-capacity/high-speed Storage Area Networking (SAN), IP networking and core infrastructure technologies. Working closely with our customer and their customer, we helped design a solution that exceeded the then-current requirements.

The solution has also proved to be a flexible architecture that accommodates a multitude of enhancements and technology refreshes within minimal maintenance windows and without impacting the mission’s availability, which is key to meeting the customer’s award fee criteria.

A longtime customer asked RPI-CS, Inc. to perform an analysis and sizing project to convert one of their ground station programs into an Infrastructure/Application Service Provider (ISP/ASP) model for their customer. The primary goal was not only to split the existing architecture into the basic support models but also to convert all assets running Solaris on SPARC to equivalently performing Red Hat Enterprise Linux platforms running on Intel processors.

We were also tasked with defining and sizing a completely virtualized system (ESXi) and providing an environment in which developers at geographically distant locations could develop, test and administer the system using thin/zero clients via Virtual Desktops. All key members of the team were RPI engineers.

Our team gathered the necessary information from many elements of the organization, then began the arduous task of creating a complete and accurate description of the system as it existed and a plan for transitioning to the system their customer was requesting.

A project of this scope is usually done in a minimum of six months with dozens of engineers. RPI was given 60 days and roughly three full-time people to work on it. The team defined the software baseline, the number of user terminals, the amount of storage, the physical CPU counts, the memory per machine and the network bandwidth and latency required for each facility. The final product was briefed, in person, to the customer’s customer by RPI’s lead engineer. It is unusual for a customer to allow a subcontractor to brief their customer, and it is a clear testament to how highly our engineers are trusted and respected by our customers.
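For a sense of the arithmetic such a sizing exercise produces, the Python sketch below shows a simplified host-count and storage calculation. Every figure in it is a hypothetical placeholder rather than a number from the engagement.

    import math

    # Hypothetical per-facility workload profile (illustrative values only).
    vm_count = 120           # virtual desktops plus application servers
    vcpus_per_vm = 2
    ram_gb_per_vm = 8
    storage_gb_per_vm = 100

    # Hypothetical ESXi host capacity and planning ratios.
    cores_per_host = 32
    ram_gb_per_host = 512
    cpu_overcommit = 4.0     # vCPU-to-physical-core ratio assumed for this sketch
    ram_headroom = 0.85      # keep ~15% of host RAM free for the hypervisor

    hosts_for_cpu = math.ceil(vm_count * vcpus_per_vm / (cores_per_host * cpu_overcommit))
    hosts_for_ram = math.ceil(vm_count * ram_gb_per_vm / (ram_gb_per_host * ram_headroom))
    hosts_needed = max(hosts_for_cpu, hosts_for_ram) + 1   # one extra host for N+1 failover

    print(f"CPU-bound host count : {hosts_for_cpu}")
    print(f"RAM-bound host count : {hosts_for_ram}")
    print(f"Hosts to deploy (N+1): {hosts_needed}")
    print(f"Raw storage needed   : {vm_count * storage_gb_per_vm / 1024:.1f} TB")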

A large agency tasked RPI-CS, Inc. engineers with providing a performance analysis of its data processing Linux cluster, which was ingesting more than a terabyte of scientific data daily. We reviewed the entire data processing architecture and interviewed various end users, allowing us to determine the best areas to target for performance improvement.

Our analysis showed that the processing nodes were issuing extremely small I/O requests when accessing the data. This I/O pattern does not typically lend itself to a traditional Storage Area Network (SAN) configuration. We advised utilizing Network-Attached Storage (NAS), which is Transmission Control Protocol (TCP) based, for processing rather than the SAN storage. A NAS unit was then installed to store the raw data.
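Spotting a pattern like this largely comes down to measuring the average request size per block device. The Python sketch below shows one simple way to do so on Linux using /proc/diskstats; it is a generic illustration and does not reflect the agency’s actual configuration.

    # Estimate average I/O request size per block device from /proc/diskstats.
    # Counters are cumulative since boot; sample twice and difference them for
    # interval rates. Very small averages (a few KiB) often indicate a workload
    # better served by TCP-based NAS than by a SAN tuned for large transfers.
    SECTOR_BYTES = 512

    def average_io_sizes(path="/proc/diskstats"):
        sizes = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                name = fields[2]
                reads, read_sectors = int(fields[3]), int(fields[5])
                writes, write_sectors = int(fields[7]), int(fields[9])
                ops = reads + writes
                if ops == 0:
                    continue
                sizes[name] = (read_sectors + write_sectors) * SECTOR_BYTES / ops
        return sizes

    if __name__ == "__main__":
        for dev, avg in sorted(average_io_sizes().items()):
            print(f"{dev:>10}: {avg / 1024:.1f} KiB average request size")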

In the new architecture, the processing cluster nodes utilized the NAS storage for data processing and the SAN storage for archiving products. The agency was very pleased with the results, as some end users reported a 2X improvement in processing time. RPI continues to support the agency and its ever-increasing data ingestion and processing requirements.

A large research and development facility selected RPI-CS, Inc. to provide a Mass Storage System (MSS) for its state-of-the-art facility showcasing energy savings and sustainable energy technologies for large compute environments.

The MSS architecture utilized an Oracle-based Hierarchical Storage Management (HSM) system that included Storage Archive Manager-Quick File System (SAM-QFS) archive software, an SL3000 tape library, multiple T4 QFS servers, X3 Network File System (NFS) Linux clients and a 300TB ZFS appliance cluster for storage.

The integration of the ZFS appliance storage was a significant challenge for both RPI and Oracle, since the appliance had only recently adopted a pool-based storage architecture. This project was the first attempt at implementing a ZFS appliance as a SAM-QFS disk cache. RPI and Oracle engineers worked closely together to fine-tune the storage configuration, meet the customer’s requirements and gain a wealth of knowledge about ZFS appliance configurations.

A Department of Defense (DoD) customer approached RPI-CS, Inc. to develop a data recorder for a proprietary sensor. The customer was budget-constrained and needed to meet certain developmental milestones in order to obtain additional funding from their sponsors. They had also seen several failed attempts by other developers who believed they could achieve the program’s requirements but ended up missing performance and schedule deadlines. RPI developed a phased plan to help the customer meet their goals and preserve budget for other necessary items.

Early in the project, it became apparent that the original prototyped OS platform, which was provided by another developer, would not meet the customer’s data rate and latency requirements. Additionally, its addressable memory constraints necessitated a change in OS. RPI tested several flavors of Linux and settled on a version with robust scientific community support that allows for architecture extensibility with high-performance shared file systems. This removed the need to rewrite or port the design as the system progresses from prototype to later phases, which include collaboration, correlation and analysis external to the data recorder.

RPI wrote a Linux device driver and a utility to configure the proprietary Peripheral Component Interconnect Express (PCIe) adapter and to trigger direct memory accesses and data capture. During the first demonstration, we removed the provided simulator and inserted the target PCIe card. The driver attached successfully and the data transfers succeeded on our first attempt. We demonstrated a sustained data rate of 3GB/sec, with minimal gaps between Direct Memory Access (DMA) transfers (limited by the proprietary card design), exceeding the customer’s expectations. The demonstration easily met their requirements, allowing them to continue development into the next phase, which will push the design into the 9GB/sec domain.
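As a rough illustration of how a sustained rate like that can be checked from user space, the Python sketch below reads from a capture device node in large chunks and reports the achieved throughput. The device path and chunk size are hypothetical placeholders, not details of the delivered recorder or its driver.

    import os
    import time

    DEVICE = "/dev/datarec0"   # hypothetical capture node exposed by a recorder driver
    CHUNK = 4 * 1024 * 1024    # 4 MiB reads to amortize per-syscall overhead
    WINDOW_SECONDS = 10

    fd = os.open(DEVICE, os.O_RDONLY)
    total = 0
    start = time.monotonic()
    try:
        while time.monotonic() - start < WINDOW_SECONDS:
            buf = os.read(fd, CHUNK)
            if not buf:        # end of stream
                break
            total += len(buf)
    finally:
        os.close(fd)

    elapsed = time.monotonic() - start
    print(f"Read {total / 1e9:.2f} GB in {elapsed:.1f} s "
          f"({total / elapsed / 1e9:.2f} GB/s sustained)")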

During the design analysis of a real-time data acquisition project, RPI-CS, Inc. found a major shortcoming: no Commercial Off-The-Shelf (COTS) solution existed for failover of Asymmetric RAID devices utilized in a Shared File System Architecture where performance and the system impact of a path failure could be defined deterministically.

We worked diligently with the platform vendor and the Host Bus Adapter (HBA) manufacturer and defined a methodology to bring the application into compliance. Our client was provided a fully qualified HBA driver and toolset to implement a comprehensive solution that achieved the required performance and failure resolution specifications. This extensive collaboration allowed us to meet the critical application requirements.

RPI-CS, Inc. was engaged to perform a high availability cluster implementation at a large agency. The project included implementing Oracle T4-4 servers with a new operating system (Solaris 11.1) in a high availability cluster (Oracle Cluster 4.1) on a new storage platform (Pillar Axiom 600) to provide new redundancy for a legacy accounting system.

Our scope included provisioning the Pillar Axiom 600 architecture into multiple storage domains, which separated the production and test/development environments. We then delivered a clustered environment for the production and test/development systems to allow for high availability failover. RPI also provided customized onsite introductory training to the local cluster administrators, as well as cluster awareness briefings for the database administrators.

This proof-of-concept project is an ongoing collaborative effort with a large system integrator, involving a high-capacity, high-performance storage solution in a classified environment.

The I/O requirements included demonstrating an I/O rate that stretched the limits of today’s storage technologies while also delivering real-time video through the same storage at low latency, all in a small footprint.

RPI-CS, Inc. contributed extensive knowledge of storage architecture, performance analysis, shared file systems, integration and virtualization, along with a seamless implementation, to a continuously evolving and highly successful solution.

Powerful Design Solutions for Mission-Critical Assignments

Questions? Call Us

Our mission is to put the values of our services, products and customers at the center of everything we do. Call us to find out how we help our customers succeed: (866) 938-7775 ext. 1

Request a Consult

Our goal is to create a true business development partnership built on a foundation of excellence and integrity. Contact us for a consultation to better understand our process: [email protected]