What Government Agencies Can Learn from Silicon Valley


There is growing interest in some corners of the government in operating more like Silicon Valley when acquiring and developing new computer systems. A recent New York Times article noted the high-level attention former Google CEO Eric Schmidt is attracting with his push to revamp America’s defense forces with more engineers, more software, and more artificial intelligence (AI).

The gist of Schmidt’s argument is that the government (and private industry, for that matter) could get higher performance without proprietary constraints by applying Silicon Valley expertise when replacing and updating computer systems.

Many point to the benefits of this approach. For more than a decade, Amazon, Facebook, and Google have built their data centers from scratch with non-proprietary solutions. They developed their own server hardware, stripping out all unnecessary features and functions. The goal has been, and remains, to use systems that deliver the best performance per dollar.

These companies and others have taken a similar approach to software, opting for open-source solutions or developing the software in-house. Certainly, the government and private businesses do not want to undertake expansive software development projects of their own. Fortunately, much of the work these companies have done is available as open source, including numerous projects from Amazon, Facebook, and Google.

In addition to traditional HPC hardware and software, Schmidt and others rightly point out how critical the adoption of AI is to improve services, retain competitive advantages, and deliver innovation. Mandates for AI start at the top with last year’s Executive Order announcing the American AI Initiative — the United States’ national strategy on artificial intelligence.

Here again, the big technology companies are opting for more cost-effective computing hardware and open-source software solutions rather than buying proprietary platforms. For example, Amazon, Facebook, and Google are building their AI solutions on the best available hardware, while also developing their own AI acceleration chips.

Why Wait?

Many high-level government initiatives will take years to develop new HPC purchasing strategies and to see them reflected in purchasing mandates. That should not prevent government entities from adopting the approach on their own. This is an area where PSSC Labs can help.

We offer HPC, Big Data, and AI systems built on best-of-breed technology. Components, sub-systems, and software are chosen based on cost/performance, not on a particular vendor’s brand.

Using this approach, PSSC Labs systems are ideal for HPC and AI applications, offering high performance, scalability, and lower total cost of ownership (TCO). In addition, this approach prevents vendor “lock-in,” which significantly increases cost while limiting growth options.

An example is our PowerWulf ZXR1+ HPC Clusters, which are application-optimized, scalable, and delivered production-ready. We have deployed more than 2,000 PowerWulf ZXR1+ HPC Clusters to a wide range of companies across 36 countries, including many to government agencies such as NASA, the US DOD, NIH, USDA, and NOAA. They support work in a wide range of applications, from bioinformatics to weather modeling. Each system is built from components selected to deliver the highest performance for the cost against an organization’s specific compute requirements. Unlike single-vendor proprietary solutions, PowerWulf HPC clusters use best-of-breed components, including:

  • Intel® and AMD® CPUs
  • Nvidia® and AMD® GPUs
  • Intel® Omni-Path and Mellanox® High Speed Network Interconnects
  • High Performance Flash and NVMe-based Parallel Storage

Given the volumes of data now routinely analyzed in HPC and mainstream applications, storage is critical in any deployment. PSSC Labs storage systems take the same best-performance-for-cost approach. For example, our Parallux High Performance Storage Clusters are cost-effective and highly scalable to meet today’s HPC application storage requirements. They scale to tens of petabytes of capacity, deliver extreme performance exceeding 20 GB/sec sustained I/O, and are compatible with POSIX file systems, allowing organizations to leverage the rich POSIX ecosystem of utilities and tools.
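
Because the clusters expose standard POSIX semantics, existing scripts and utilities can read and write data on them without modification. The short Python sketch below illustrates the idea; the mount point /mnt/parallux and the file names are hypothetical examples for illustration, not actual product paths.

    # A minimal sketch of what POSIX compatibility means in practice: ordinary
    # file APIs work against the parallel storage exactly as they would on a
    # local disk. The path /mnt/parallux is a hypothetical mount point.
    import os
    from pathlib import Path

    run_dir = Path("/mnt/parallux/projects/demo_run")
    run_dir.mkdir(parents=True, exist_ok=True)

    # Standard open/write/read calls require no cluster-specific code.
    sample = run_dir / "input.dat"
    sample.write_bytes(b"\x00" * 4096)

    # POSIX metadata (size, permissions, timestamps) is available through the
    # usual calls, so existing tools and scripts carry over unchanged.
    info = os.stat(sample)
    print(f"{sample}: {info.st_size} bytes, mode {oct(info.st_mode)}")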

Similar to the PowerWulf ZXR1+ HPC Clusters, Parallux High Performance Storage Clusters use best-of-breed elements to deliver high performance with cost in mind. The solutions are customized to an organization’s workloads, and elements can include:

  • High Performance Flash and NVMe Storage Media or Traditional Spinning Hard Drives
  • Intel® Omni-Path and Mellanox® High Speed Network Interconnects
  • Ceph and Gluster Parallel File Systems

All PSSC Labs solutions are tightly integrated and optimized, and they are delivered as turnkey solutions with easy-to-use management systems.

The bottom line: instead of waiting years for high-level mandates that dictate the use of systems that maximize performance at a lower TCO, there are options available today. Government departments, groups, and entities that adopt such systems can embrace Silicon Valley methodologies now and reap the benefits: new and better services delivered more responsively.

