The National Institute for Computational Sciences

E&O Opportunities

XSEDE

Patch to Fix Intel-based PCs with Enterprise Bug Rolls Out Next Week

Thu, 05/11/2017 - 16:15

Next week, PC vendors will start rolling out patches for a severe vulnerability found in certain Intel-based business systems, including laptops, that makes them easier to hack. Intel on Friday released a new notice urging clients to take steps to secure their systems. The chipmaker has also released a downloadable tool that can help IT administrators and users discover whether a machine they own has the vulnerability. In addition, vendors including Fujitsu, HP, and Lenovo have released lists showing which products are affected and when the patches will roll out. The products include laptops from Lenovo's ThinkPad line and HP's EliteBook series, along with servers and desktops. Some of the patches are slated to arrive in June. Computers running the enterprise management features found in Intel firmware from the past eight years are affected. Specifically, the vulnerability resides in past versions of Intel Active Management Technology, Intel Small Business Technology, and Intel Standard Manageability. Fortunately, the vulnerability can only be exploited if these features have been enabled, according to security firm Embedi, which uncovered the bug. Read more at http://www.pcworld.com/article/3195029/security/patch-to-fix-intel-based-pcs-with-enterprise-bug-rolls-out-next-week.html
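
For a quick first check before running Intel's detection tool, a machine can be probed for the TCP ports commonly associated with the AMT web interface, 16992 and 16993. The sketch below is only an illustrative heuristic, not Intel's official tool, and a closed port does not prove a system is unaffected.

```python
# Minimal sketch: probe the TCP ports commonly used by Intel AMT's web
# interface (16992 for HTTP, 16993 for HTTPS). A listener on these ports
# suggests AMT may be provisioned; Intel's downloadable detection tool
# remains the authoritative check.
import socket

AMT_PORTS = [16992, 16993]  # ports commonly associated with Intel AMT

def probe_amt_ports(host: str = "127.0.0.1", timeout: float = 1.0) -> dict:
    """Return a mapping of port -> True if something is listening."""
    results = {}
    for port in AMT_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            results[port] = sock.connect_ex((host, port)) == 0
    return results

if __name__ == "__main__":
    for port, is_open in probe_amt_ports().items():
        print(f"port {port}: {'listening' if is_open else 'closed'}")
```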

Afnan Abdul Rehman 2017-05-11T20:15:03Z

Researchers Unveil New Password Meter that will Change How Users Make Passwords

Thu, 05/11/2017 - 16:14

One of the most popular passwords in 2016 was "qwertyuiop," even though most password meters will tell you how weak that is. The problem is that no existing meter offers any good advice on making it better—until now. Researchers from Carnegie Mellon University and the University of Chicago have just unveiled a new, state-of-the-art password meter that offers real-time feedback and advice to help people create better passwords. To evaluate its performance, the team conducted an online study in which they asked 4,509 people to use it to create a password. "Instead of just having a meter say, 'Your password is bad,' we thought it would be useful for the meter to say, 'Here's why it's bad and here's how you could do better,'" says Nicolas Christin, a faculty member of the CyLab Security and Privacy Institute, a professor in the Department of Engineering and Public Policy and the Institute for Software Research at Carnegie Mellon, and a co-author of the study. The study will be presented at this week's CHI 2017 conference in Denver, Colorado, where it will also receive a Best Paper Award. The data-driven feedback is presented in real time, as a user types their password character by character. The team has open-sourced the meter on GitHub. Read more at https://phys.org/news/2017-05-unveil-password-meter-users-passwords.html
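
The sketch below is not the researchers' meter (which is data-driven and available on GitHub); it is a simple, hypothetical illustration of the core idea: pair the verdict with concrete suggestions the user can act on.

```python
# Illustrative sketch only (not the CMU/Chicago meter): instead of a bare
# "weak/strong" score, return actionable tips explaining what to change.
import re

COMMON = {"qwertyuiop", "password", "123456", "letmein"}

def feedback(pw: str) -> list[str]:
    tips = []
    if pw.lower() in COMMON:
        tips.append("This is one of the most common passwords; avoid it entirely.")
    if len(pw) < 12:
        tips.append("Lengthen the password; 12+ characters resists guessing far better.")
    if not re.search(r"[A-Z]", pw) or not re.search(r"\d", pw):
        tips.append("Mix in uppercase letters and digits at unpredictable positions.")
    if re.search(r"(.)\1\1", pw):
        tips.append("Avoid repeating the same character three or more times.")
    return tips or ["No obvious weaknesses found by these simple checks."]

if __name__ == "__main__":
    print(feedback("qwertyuiop"))
```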

Afnan Abdul Rehman 2017-05-11T20:14:18Z

AI Technology from China Helps Radiologists Detect Lung Cancer

Thu, 05/11/2017 - 16:13

Today Infervision introduced its innovative deep-learning solution to help radiologists identify suspicious lesions and nodules in lung cancer patients faster than ever before. The Infervision AI platform is the world’s first to reshape the workflow of radiologists, and it is already showing dramatic results at several top hospitals in China. Infervision’s AI-aided CT diagnosis, backed by high-performance parallel computing, learns the core characteristics of lung cancer and efficiently detects suspected cancer features across different CT image sequences, supporting early diagnosis and consequently early treatment. The technology is also used to assist in X-ray diagnosis, where it has achieved accuracy approaching that of a deputy chief physician in diagnosing cardiothoracic diseases at one of the top Chinese hospitals where the software is now in use. For the past six months, the technology has been in use at several top hospitals in China, a country that sees hundreds of thousands of new lung cancer cases annually yet has too few radiologists. After rigorous testing and integration with the standard PACS (picture archiving and communication system), Infervision’s technology is proving extremely effective and is enhancing the work of Chinese doctors by acting as a second pair of eyes. Read more at http://insidehpc.com/2017/05/ai-technology-china-helps-radiologists-detect-lung-cancer/

Afnan Abdul Rehman 2017-05-11T20:13:27Z

NCSA Highlights Scientific Impacts from Three Years of Blue Waters

Thu, 05/11/2017 - 16:12

“Build it and they will come” is one way to approach building a supercomputer, but it’s not what the National Center for Supercomputing Applications (NCSA) did with Blue Waters, the largest leadership-class National Science Foundation supercomputer. Before the system went online in April 2013, Blue Waters staff worked with more than 20 science teams to determine a unique, balanced hardware configuration—a process now known as co-design. Three years later, a sample of 31 science teams that have used Blue Waters was surveyed and interviewed as part of a report meant to judge the effectiveness and productivity of this unique system, which is housed at NCSA’s home institution, the University of Illinois at Urbana-Champaign. Using information gathered in the surveys, the report’s authors at International Data Corporation’s HPC division (now known as Hyperion Research) rated the impact of each team’s findings on an “innovation index,” using a methodology they developed to analyze the effectiveness of more than 700 scientific projects, including international HPC projects. The Hyperion Research analysts noted in the report that “NCSA did an unusually thorough job of preparing [science teams] for Blue Waters.” Read more at https://www.hpcwire.com/off-the-wire/ncsa-highlights-scientific-impacts-three-years-blue-waters/

Afnan Abdul Rehman 2017-05-11T20:12:29Z

Sixteen Teams to Compete in SC17 Student Cluster Competition

Thu, 05/11/2017 - 16:11

Today SC17 announced that 16 teams will take part in the Student Cluster Competition. Hailing from across the U.S., as well as Asia and Europe, the student teams will race to build HPC clusters and run a full suite of applications in the space of just a few days. The Student Cluster Competition is a high-energy event featuring young supercomputing talent from around the world competing to build and operate powerful cluster computers. Created as an opportunity to showcase student expertise in a friendly yet spirited competition, the SCC aims to introduce the next generation of students to the high-performance computing community. The number of teams selected for SC17 is the highest since the first event was held at SC07 in Reno, and 19 reviewers sifted through 20 team applications, also the highest number to date. The SC17 competition will also mark the first time that the winners of the world’s top three student cluster competitions meet: the University of Science and Technology of China, winner at SC16; Tsinghua University of China, winner of the ASC17 Student Supercomputer Challenge; and the winner of the upcoming ISC Student Cluster Competition, to be held in June. Learn more at http://insidehpc.com/2017/05/sixteen-teams-compete-sc17-student-cluster-competition/

Afnan Abdul Rehman 2017-05-11T20:11:42Z

NASA Issues a Challenge to Speed Up Its ‘FUN3D’ Supercomputer Code

Thu, 05/11/2017 - 16:10

Do you, or someone you know, know how to program computers? NASA has a challenging assignment for you. NASA’s aeronautical innovators are sponsoring a competition to reward qualified contenders who can manipulate the agency’s FUN3D design software so it runs 10 to 10,000 times faster on the Pleiades supercomputer without any decrease in accuracy. The competition is called the High Performance Fast Computing Challenge (HPFCC). “This is the ultimate ‘geek’ dream assignment,” said Doug Rohn, director of NASA’s Transformative Aeronautics Concepts Program (TACP). “Helping NASA speed up its software to help advance our aviation research is a win-win for all.” NASA’s aviation research is based on what is often described as a three-legged stool. One leg involves testing initial designs with computational fluid dynamics, or CFD, which uses numerical analysis and data structures running on a supercomputer to solve and analyze problems involving fluid flows. Learn more at https://www.hpcwire.com/off-the-wire/nasa-issues-challenge-speed-fun3d-supercomputer-code/

Afnan Abdul Rehman 2017-05-11T20:10:53Z

Reaching for the Stormy Cloud with Chameleon

Thu, 05/11/2017 - 16:10

Some scientists dream about big data. The dream bridges two divided realms. One realm holds lofty peaks of number-crunching scientific computation. Endless waves of big data analysis line the other realm. A deep chasm separates the two. Discoveries await those who cross these estranged lands. Unfortunately, data cannot move seamlessly between the Hadoop Distributed File System (HDFS) and parallel file systems (PFS). Scientists who want to take advantage of the big data analytics available on Hadoop must copy data over from parallel file systems, which can slow workflows to a crawl, especially those involving terabytes of data. Computer scientists working in Xian-He Sun's group are bridging the file system gap with a cross-platform Hadoop reader called PortHadoop, short for portable Hadoop. "PortHadoop, the system we developed, moves the data directly from the parallel file system to Hadoop's memory instead of copying from disk to disk," said Xian-He Sun, Distinguished Professor of Computer Science at the Illinois Institute of Technology. Sun's PortHadoop research was funded by the National Science Foundation and the NASA Advanced Information Systems Technology (AIST) Program. The concept of 'virtual blocks' helps bridge the two systems by mapping data from parallel file systems directly into Hadoop memory, creating a virtual HDFS environment. These 'virtual blocks' reside in the centralized namespace in the HDFS NameNode. The MapReduce application cannot see the 'virtual blocks'; a map task triggers an MPI file read procedure that fetches the data from the remote PFS before the Mapper function processes it. In other words, a dexterous sleight of hand by PortHadoop tricks HDFS into skipping the costly I/O operations and data replications it usually expects. Learn more at https://www.tacc.utexas.edu/-/reaching-for-the-stormy-cloud-with-chameleon
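
The following sketch illustrates the virtual-block idea in simplified form. It is not PortHadoop's implementation; the class and function names are hypothetical, and an ordinary file read stands in for the MPI-based read PortHadoop uses to pull data from the remote PFS.

```python
# Conceptual sketch only (not PortHadoop's actual code): a "virtual block"
# records where data lives on the parallel file system (PFS) so a map task
# can fetch it straight into memory, skipping the disk-to-disk copy into HDFS.
from dataclasses import dataclass

@dataclass
class VirtualBlock:
    pfs_path: str      # file on the parallel file system
    offset: int        # byte offset of this block within the file
    length: int        # block size in bytes

class VirtualNamespace:
    """Stand-in for the centralized namespace kept by the HDFS NameNode."""
    def __init__(self):
        self.blocks = {}  # logical name -> list of VirtualBlock

    def register(self, logical_name: str, pfs_path: str, file_size: int,
                 block_size: int = 128 * 1024 * 1024) -> None:
        # Map a PFS file into virtual blocks without moving any data.
        self.blocks[logical_name] = [
            VirtualBlock(pfs_path, off, min(block_size, file_size - off))
            for off in range(0, file_size, block_size)
        ]

def read_virtual_block(block: VirtualBlock) -> bytes:
    # In PortHadoop this read is performed against the remote PFS via MPI;
    # here an ordinary seek/read stands in for that call.
    with open(block.pfs_path, "rb") as f:
        f.seek(block.offset)
        return f.read(block.length)
```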

Afnan Abdul Rehman 2017-05-11T20:10:03Z

SDSC to Double ‘Comet’ Supercomputer’s Graphic Processor Count

Thu, 05/11/2017 - 16:09

The San Diego Supercomputer Center (SDSC) at the University of California San Diego has been granted a supplemental award from the National Science Foundation (NSF) to double the number of graphics processing units, or GPUs, on its petascale-level Comet supercomputer, in direct response to growing demand for GPU computing across a wide range of research domains. Under the supplemental NSF award, valued at just over $900,000, SDSC is expanding the high-performance computing resource with the addition of 36 GPU nodes, each with four NVIDIA P100s, for a total of 144 new GPUs. This will double the number of GPUs on Comet from the current 144 to 288. The nodes are being provided by Dell, the vendor and co-design partner for Comet, and are expected to be in production by early July. The expansion will make Comet the largest provider of GPU resources available to the NSF-funded Extreme Science and Engineering Discovery Environment (XSEDE), a national partnership of institutions that provides academic researchers with the most advanced collection of digital resources and services in the world. Prior to this award, the NSF granted SDSC a total of $24 million to develop and operate Comet, which went into production in mid-2015. Once used primarily for video game graphics, today’s far more powerful GPUs offer the accuracy, speed, and memory needed for scientific applications ranging from phylogenetics and molecular dynamics to some of the most detailed seismic simulations ever made, which help predict ground motions to save lives and minimize property damage. Learn more at http://www.sdsc.edu/News%20Items/PR20170502_Comet_GPU.html

Afnan Abdul Rehman 2017-05-11T20:09:12Z

MapD Open Sources High-Speed GPU-Powered Database

Thu, 05/11/2017 - 16:08

Today MapD Technologies released the MapD Core database to the open source community under the Apache 2 license, seeding a new generation of data applications. “Open source is sparking innovation for data science and analytics developers,” said Greg Papadopoulos, venture partner at New Enterprise Associates (NEA). “An open-source GPU-powered SQL database will make entirely new applications possible, especially in machine learning where GPUs have had such an enormous impact. We’re incredibly proud to partner with the MapD team as they take this pivotal step.” MapD pioneered the use of graphics processing units (GPUs) to analyze multi-billion-row datasets in milliseconds, orders of magnitude faster than traditional CPU-based systems. By open sourcing the MapD Core database and associated visualization libraries, MapD is making the world’s fastest analytics platform available to everyone. MapD has also released a free Community Edition of its software for non-commercial development and academic use; the Community Edition includes the MapD Core database and the MapD Immerse visual analytics client. Learn more at http://insidehpc.com/2017/05/mapd-open-sources-high-speed-gpu-powered-database/

Afnan Abdul Rehman 2017-05-11T20:08:35Z

Intel Consolidates Xeon Product Lines

Thu, 05/11/2017 - 16:06

The much-anticipated unification of Intel’s Xeon E3, E5, and E7 product lines is upon us. Starting with the upcoming “Skylake” datacenter chips, all Xeons will fall under the new Xeon Scalable Processor (SP) family. As a result, the Xeon EP and EX suffixes are being consolidated into a single SP designation. Under the new structure, Intel has come up with four processor levels: Platinum, Gold, Silver, and Bronze. It is anticipated that the old E3 (single-socket), E5 (dual-socket), and E7 (quad-socket/multi-socket) lines will map to the Silver, Gold, and Platinum levels, respectively, but since four levels need to be filled, the mapping is likely to be a bit more nuanced than that. In any case, Intel is using the announcement of the new family to tee up the Skylake silicon, which, according to an editorial penned by Intel VP and GM Lisa Spelman, represents “the biggest set of data center platform advancements in this decade.” Skylake will include a number of integrated “performance accelerators,” namely the 512-bit Advanced Vector Extensions (AVX-512), QuickAssist Technology (QAT), and the Volume Management Device (VMD). Not all Skylake SP chips will include all of these accelerators; rather, they will be distributed across the various levels and individual SKUs based on their anticipated target audience. Learn more at https://www.top500.org/news/intel-consolidates-xeon-product-lines/

Afnan Abdul Rehman 2017-05-11T20:06:55Z

Supermicro Systems Deliver 170 TFLOPS FP16 of Peak Performance for AI at GTC

Thu, 05/11/2017 - 16:05

GPU Technology Conference – Super Micro Computer, Inc., a global leader in compute, storage, and networking technologies, including green computing, will exhibit new GPU-based servers at the GPU Technology Conference (GTC) from May 8 to 11 at the San Jose Convention Center, Booth #111. Optimized applications for Supermicro GPU supercomputing systems include machine learning, artificial intelligence, HPC, cloud and virtualized graphics, and hyperscale workloads. Supermicro will have on display the SYS-1028GQ-TXRT and SYS-4028GR-TXRT, with support for four and eight NVIDIA Tesla P100 SXM 2.0 modules, respectively, both featuring NVIDIA NVLink™ interconnect technology. Supermicro will also display its multi-node GPU solutions and high-performance workstations with support for four PCIe 3.0 x16 slots. “NVIDIA’s GPU computing platform provides a dramatic boost in application throughput for HPC, advanced analytics, and AI workloads,” said Paresh Kharya, Tesla Product Management Lead at NVIDIA. “With our Tesla data center GPUs, Supermicro’s new high-density servers offer customers high performance and superior efficiency to address their most demanding computing challenges.” Learn more at https://www.hpcwire.com/off-the-wire/supermicro-systems-deliver-170-tflops-fp16-peak-performance-ai-gtc/

Afnan Abdul Rehman 2017-05-11T20:05:44Z

Study Finds Gender Bias in Open-Source Programming

Thu, 05/04/2017 - 15:21

A study comparing acceptance rates of contributions from men and women in an open-source software community finds that, overall, women's contributions tend to be accepted more often than men's, but when a woman's gender is identifiable, her contributions are rejected more often. "There are a number of questions and concerns related to gender bias in computer programming, but this project was focused on one specific research question: To what extent does gender bias exist when pull requests are judged on GitHub?" says Emerson Murphy-Hill, corresponding author of a paper on the study and an associate professor of computer science at North Carolina State University. GitHub is an online programming community that fosters collaboration on open-source software projects. When people identify ways to improve the code in a given project, they submit a "pull request." Those pull requests are then approved or denied by "insiders," the programmers responsible for overseeing the project. Read more at https://phys.org/news/2017-05-gender-bias-open-source.html

Afnan Abdul Rehman 2017-05-04T19:21:23Z

Supercomputers Assist in Search for New, Better Cancer Drugs

Thu, 05/04/2017 - 15:20

Surgery and radiation remove, kill, or damage cancer cells in a certain area. But chemotherapy—which uses medicines or drugs to treat cancer—can work throughout the whole body, killing cancer cells that have spread far from the original tumor. Finding new drugs that can more effectively kill cancer cells or disrupt the growth of tumors is one way to improve survival rates for ailing patients. Increasingly, researchers looking to uncover and test new drugs use powerful supercomputers like those developed and deployed by the Texas Advanced Computing Center (TACC). "Advanced computing is a cornerstone of drug design and the theoretical testing of drugs," said Matt Vaughn, TACC's Director of Life Science Computing. "The sheer number of potential combinations that can be screened in parallel before you ever go in the laboratory makes resources like those at TACC invaluable for cancer research." Three projects powered by TACC supercomputers, which use virtual screening, molecular modeling, and evolutionary analyses, respectively, to explore chemotherapeutic compounds, exemplify the type of cancer research that advanced computing enables. Read more at https://phys.org/news/2017-05-supercomputers-cancer-drugs.html
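
As a rough illustration of the parallel screening Vaughn describes, the sketch below scores a library of candidate compounds concurrently and keeps the top-ranked hits; the scoring function is a placeholder, not a real docking code, and the names are hypothetical.

```python
# Schematic sketch of the parallelism behind virtual screening (not TACC's
# actual pipeline): score many candidate compounds against a target in
# parallel and keep the best-ranked hits for laboratory follow-up.
from concurrent.futures import ProcessPoolExecutor

def dock_score(compound_id: str) -> tuple:
    # Placeholder for a real docking/scoring calculation; here it just
    # hashes the ID to produce a pseudo-score.
    score = (hash(compound_id) % 1000) / 100.0
    return compound_id, score

def screen(compound_ids: list, top_n: int = 10) -> list:
    # Fan the scoring work out across worker processes.
    with ProcessPoolExecutor() as pool:
        scored = list(pool.map(dock_score, compound_ids))
    # Lower scores rank better in this toy example.
    return sorted(scored, key=lambda item: item[1])[:top_n]

if __name__ == "__main__":
    library = [f"compound-{i:06d}" for i in range(10_000)]
    for cid, score in screen(library):
        print(cid, score)
```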

Afnan Abdul Rehman 2017-05-04T19:20:34Z

IBM "Minsky" Cluster Achieves Billion-Cell Reservoir Simulation in Record Time

Thu, 05/04/2017 - 15:19

A 30-node cluster of GPU-accelerated IBM “Minsky” servers has performed a billion-cell petroleum reservoir simulation in 92 minutes. Previous attempts at this scale required 20 hours using a large supercomputer with thousands of processors. Oil and gas companies rely on reservoir simulations to help figure out the most productive spots to drill. But since these simulations are so computationally demanding, typical runs use 10 to 100 million cells, although Saudi Aramco managed to achieve a trillion-cell reservoir simulation last year using its Cray XC 40 supercomputer. Nonetheless, a billion-cell simulation is rarely practical in most HPC setups at oil and gas companies. However, the level of resolution provided by such a detailed model can help lower production costs and environmental risks, allowing drillers to extract more oil from a given reservoir. The billion-cell run took advantage of the latest P100 Tesla GPUs from NVIDIA. To make the most of these devices, the simulation was performed using ECHELON, a petroleum reservoir simulation package from Stone Ridge Technology that is highly optimized for GPU acceleration. Stone Ridge Technology president Vincent Natoli noted that the fast turnaround time now possible for such a detailed simulation will enable reservoir engineers to run more models, making oil drilling and production more efficient. “By increasing compute performance and efficiency by more than an order of magnitude, we’re democratizing HPC for the reservoir simulation community,” said Natoli. Read more at https://www.top500.org/news/ibm-minsky-cluster-achieves-billion-cell-reservoir-simulation-in-record-time/

Afnan Abdul Rehman 2017-05-04T19:19:24Z

Students and Scholars Named in First National CyberGIS Summer School

Thu, 05/04/2017 - 15:18

The CyberGIS Center for Advanced Digital and Spatial Studies at the University of Illinois at Urbana-Champaign's National Center for Supercomputing Applications (NCSA) is hosting a Summer School for graduate students and early-career scholars to gain cutting-edge knowledge about scientific problem solving enabled by cyberGIS and geospatial data science. Thirty-six students from around the country are slated to participate. In May 2017, the University Consortium for Geographic Information Science (UCGIS) will launch this first Summer School around the theme of Collaborative Problem Solving with CyberGIS and Geospatial Data Science. Participants will develop novel solutions to a set of motivating and significant scientific problems as they learn about the latest scientific advances and technical capabilities of cyberGIS (e.g., ROGER). These problems focus on three general interdisciplinary areas: emergency management; smart and connected communities; and the nexus of food, energy, and water systems. The Summer School is designed to provide the professional interactions that are key to addressing complex geospatial problems and related big data challenges. Participants will also prepare reports and present posters at the subsequent UCGIS Symposium 2017 in Arlington, Virginia. Learn more at http://www.ncsa.illinois.edu/news/story/students_named_in_first_gis_summer_school

Afnan Abdul Rehman 2017-05-04T19:18:14Z

Calling High School Students for UC San Diego’s Mentor Assistance Program

Thu, 05/04/2017 - 15:17

San Diego-area high school students interested in pursuing a career in scientific research are invited to apply to UC San Diego’s Mentor Assistance Program (MAP), a campus-wide initiative designed to engage students in a mentoring relationship with an expert from a vast array of disciplines. Launched almost two years ago by the university’s San Diego Supercomputer Center (SDSC) and UC San Diego School of Medicine, MAP’s mission is to provide a pathway for student researchers to gain access to UC San Diego faculty, postdoctoral fellows, Ph.D. candidates, and staff who mentor them in their own fields of interest. Mentors are being recruited from across campus, from fields that include athletics, biology, chemistry, aerospace engineering, network architectures, pharmaceutical sciences, physics, social studies, and more. View a full list on the MAP website. Learn more at http://www.sdsc.edu/News%20Items/PR20170418_MAP.html

Afnan Abdul Rehman 2017-05-04T19:17:29Z

MIT Professor Runs Record Google Compute Engine job with 220K Cores

Thu, 05/04/2017 - 15:15

Over at the Google Blog, Alex Barrett writes that an MIT math professor recently broke the record for the largest-ever Compute Engine cluster, with 220,000 cores on Preemptible VMs. According to Google, this is the largest known HPC cluster ever to run in the public cloud. “Andrew V. Sutherland is a computational number theorist and Principal Research Scientist at MIT, and is using Compute Engine to explore generalizations of the Sato-Tate Conjecture and the conjecture of Birch and Swinnerton-Dyer to curves of higher genus. In his latest run, he explored 10^17 hyperelliptic curves of genus 3 in an effort to find curves whose L-functions can be easily computed, and which have potentially interesting Sato-Tate distributions. This yielded about 70,000 curves of interest, each of which will eventually have its own entry in the L-functions and Modular Forms Database (LMFDB).” The flexibility of Google Compute Engine was key to carrying out this computation. Learn more at http://insidehpc.com/2017/04/mit-professor-runs-record-google-compute-engine-job-220k-cores/

Afnan Abdul Rehman 2017-05-04T19:15:55Z

Call for Papers: High Performance Computing for Big Data

Thu, 05/04/2017 - 15:14

The 4th International Workshop on High Performance Computing for Big Data (HPC4BD 2017) will be held in conjunction with the 46th International Conference on Parallel Processing (ICPP) on August 14, 2017, in Bristol, United Kingdom. Scope and topics of interest: Processing large datasets to extract information and knowledge has always been a fundamental problem. Today this problem is further exacerbated, as the data a researcher or a company needs to cope with can be immense in terms of volume, distributed in terms of location, and unstructured in terms of format. Recent advances in computer hardware and storage technologies have allowed us to gather, store, and analyze such large-scale data. However, without scalable and cost-effective algorithms that use these resources efficiently, neither the resources nor the data can serve science and society to their full potential. Read more at http://people.sabanciuniv.edu/kaya/hpc4bd/hpc4bd.html

Afnan Abdul Rehman 2017-05-04T19:14:22Z

Call for Papers: 7th Workshop on Language-Based Parallel Programming Models (WLPP 2017) - Deadline: May 5, 2017

Thu, 05/04/2017 - 15:12

WLPP 2017 is a full-day workshop to be held at PPAM 2017, focusing on high-level programming for large-scale parallel systems and multicore processors, with special emphasis on component architectures and models. Its goal is to bring together researchers working in the areas of applications, computational models, language design, compilers, system architecture, and programming tools to discuss new developments in programming clouds and parallel systems. The workshop covers any language-based parallel programming model, such as OpenMP, Python, Intel TBB, Microsoft .NET 4.0 parallel extensions (TPL and PPL), Java parallel extensions, PGAS languages, Unified Parallel C (UPC), Co-Array Fortran (CAF), and GPGPU programming models such as CUDA, OpenCL, and OpenACC. Contributions on other high-level programming models and supportive environments for parallel and distributed systems are equally welcome. The workshop will feature papers that explore application developers' experiences with these languages and the performance of real applications, as well as experiences implementing tools that support the development and parallelization of applications or their execution on different computing platforms. Experiences in moving ideas and concepts from one programming model to another are also welcome. Read more at http://wlpp17.weebly.com/

Afnan Abdul Rehman 2017-05-04T19:12:57Z

TACC Announces Advanced Computing and HPC Services Partnership with NASA JPL

Thu, 05/04/2017 - 15:11

The Texas Advanced Computing Center (TACC) today announced the formation of a five-year advanced computing partnership with NASA's Jet Propulsion Laboratory (JPL) worth up to $3.1 million. Under the contract, TACC will provide resources, including supercomputers, networks, storage, and expertise in applying computational methods to fundamental research, science, and engineering challenges. Advanced computing is foundational to the success of a wide range of modern science and engineering efforts relevant to JPL, from the design of next-generation flight systems to processing real-time data streams from large-scale instruments such as telescopes. JPL is the leading U.S. center for robotic exploration of the solar system, with 19 spacecraft and 10 major instruments carrying out planetary, Earth science, and space-based astronomy missions. JPL's requirements for specialized and increasingly capable HPC resources grow every year. The partnership will significantly extend JPL's existing computational services and expertise with resources at TACC. Services offered to JPL include HPC capabilities and scientific, technical, and operational consulting. "We believe this is an interesting space where we have something to contribute and JPL has things to teach us as well," said John West, director of Strategic Initiatives at TACC. "We always look for opportunities that are high on the intellectual engagement scale. It makes us a better organization and better at serving our customers." Learn more at https://www.tacc.utexas.edu/-/tacc-announces-advanced-computing-and-hpc-services-partnership-with-nasa-jpl

Afnan Abdul Rehman 2017-05-04T19:11:59Z
