High Performance Computing vs Quantum Computing for Neural Networks supporting Artificial Intelligence
Pace University

A personalized learning system that adapts to learners' interests, needs, prior knowledge, and available resources is possible with artificial intelligence (AI) that utilizes natural language processing in neural networks. These deep learning neural networks can run on high performance computers (HPC) or on quantum computers (QC). Both HPC and QC are emergent technologies. The ultimate goal of this project is to understand both systems well enough to select which is more effective for a deep learning AI program, and to demonstrate that understanding through example. The path into technologies such as HPC and QC is narrow at present because it relies on classical education methods and mentoring. The gap between the high demand for knowledge workers in these fields and the much slower rate at which teaching expertise develops is widening. Here, an AI cognitive agent, trained via deep learning neural networks, can help in emergent technology subjects by assisting the instructor-learner pair with adaptive wisdom. This project builds the foundations for that AI cognitive agent.

The role of the student facilitator will involve optimizing a deep learning neural network, comparing and contrasting how it performs on the newest technologies, namely a quantum computer (and/or a quantum computer simulator) and a high performance computer, and demonstrating the efficiency of each computing approach. The student facilitator will perform these tasks at the rate described in the proposal. Milestone work will be displayed and shared publicly by posting Jupyter Notebooks to Google Colab, linked to regular GitHub uploads.
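
As a starting point for those milestone notebooks, the sketch below shows one way the efficiency comparison might be instrumented: the same small Keras network is trained on CPU and, if one is available, GPU, and each run is timed. This is a minimal illustration under stated assumptions, not the project's actual benchmark; the model shape, synthetic data, and epoch counts are placeholders, and a quantum computer or simulator backend would be timed with the same harness rather than through tf.device.

```python
# Illustrative timing harness for the HPC side of the comparison: train the
# same small Keras model on CPU and (if available) GPU, timing each run.
# All sizes and names here are placeholder assumptions.
import time
import numpy as np
import tensorflow as tf

def make_model():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(64,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

# Synthetic stand-in data; the real project would use its NLP training set.
x = np.random.rand(10_000, 64).astype("float32")
y = np.random.randint(0, 10, size=10_000)

def timed_fit(device):
    with tf.device(device):
        model = make_model()
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        start = time.perf_counter()
        model.fit(x, y, epochs=3, batch_size=256, verbose=0)
    return time.perf_counter() - start

print(f"CPU: {timed_fit('/CPU:0'):.2f}s")
if tf.config.list_physical_devices("GPU"):
    print(f"GPU: {timed_fit('/GPU:0'):.2f}s")
```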

Big Data Portal for Sharing Real-world Bioinformatics Data Sets to the Public Domain
University of Maine at Augusta

This project aims to facilitate the sharing of large data sets for research and education across Maine as well as across the Open Storage Network. It is the intention of the Mount Desert Island Biological Laboratory (MDIBL) to make data files and metadata publicly available in exchange for free access. These data are of interest and value to Data Science faculty at the University of Maine at Augusta for teaching and research as part of a system-wide data science degree.

The project requires the development of a front-end and back-end system, preferably written in Go and deployed in a container, preferably Docker. The end result will allow uploading, downloading, metadata tagging, and submission of HPC jobs that use the data.
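
To make those required operations concrete, here is a minimal sketch of the portal's API surface. The proposal prefers Go; Python with Flask is used here purely to keep the illustration compact, and the same route structure maps directly onto Go's net/http. The endpoint paths, storage layout, and metadata scheme are all hypothetical assumptions, not a specification.

```python
# Minimal sketch of the portal's core operations: upload, download, and
# metadata tagging. Paths and storage layout are hypothetical.
import json
import os
from flask import Flask, jsonify, request, send_from_directory

app = Flask(__name__)
DATA_DIR = "datasets"                       # assumed local storage location
META_FILE = os.path.join(DATA_DIR, "metadata.json")
os.makedirs(DATA_DIR, exist_ok=True)

def load_meta():
    if not os.path.exists(META_FILE):
        return {}
    with open(META_FILE) as f:
        return json.load(f)

def save_meta(meta):
    with open(META_FILE, "w") as f:
        json.dump(meta, f, indent=2)

@app.route("/datasets/<name>", methods=["PUT"])
def upload(name):
    # Write the raw request body to disk and record any ?tag=... values.
    with open(os.path.join(DATA_DIR, name), "wb") as f:
        f.write(request.get_data())
    meta = load_meta()
    meta[name] = {"tags": request.args.getlist("tag")}
    save_meta(meta)
    return jsonify({"uploaded": name}), 201

@app.route("/datasets/<name>", methods=["GET"])
def download(name):
    return send_from_directory(DATA_DIR, name)

@app.route("/metadata/<name>", methods=["GET"])
def metadata(name):
    return jsonify(load_meta().get(name, {}))

if __name__ == "__main__":
    app.run(port=8080)
```

An upload such as curl -T data.csv "http://localhost:8080/datasets/data.csv?tag=genomics" exercises the tagging path; an HPC job-submission endpoint would follow the same pattern, handing the stored file path to the scheduler.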

Deep Learning High-Resolution Land Cover Mapping for Vermont
University of Vermont

Executive Summary

Funding is requested from the Northeast CyberTeam to support an undergraduate intern who will help advance remote sensing deep learning workflows supporting Vermont’s high-resolution land cover initiative. The internship will be based out of the University of Vermont Spatial Analysis Laboratory (SAL) and supervised by SAL Director and faculty member Jarlath O’Neil-Dunne. This internship will make extensive use of the Vermont Advanced Computing Core (VACC), particularly the DeepGreen GPU cluster.


The State of Vermont is under both regulatory and public pressure to improve the water quality of Lake Champlain. State agencies must have access to high-resolution land cover information that is detailed enough to provide parcel-level quantification of land cover features. The University of Vermont, with funding from the State of Vermont, led the development of the 2016 statewide, high-resolution land cover dataset. This 2016 dataset is the most accurate, detailed, and comprehensive land cover map ever made of Vermont. However, the existing workflows employed to develop it are slow and expensive, running on individual desktop workstations. Moreover, the dataset was already out of date the moment it was produced.

In February 2020, a meeting was held among state agency representatives, the Vermont Advanced Computing Core, and the Spatial Analysis Laboratory. State agencies voiced their desire for an approach to land cover mapping that would allow more rapid updates of high-resolution land cover products and capture fine-scale changes that could influence water quality, such as the construction of a new building.


This project will focus on integrating deep learning approaches into the SAL’s feature extraction workflows. Deep learning has shown tremendous potential for mapping land cover from high-resolution remotely sensed datasets. Deep learning techniques by themselves, however, may not always be optimal for updating existing land cover datasets, as false change can result from differences in the source data or from errors in the mapping itself. We propose to leverage deep learning to more efficiently update the State’s high-resolution land cover maps through a hybrid approach. Our desire is to take advantage of the potential that deep learning offers while still employing the methodologies that ensure quality specifications are met. The goal of this hybrid approach is a faster, more efficient, and more accurate process for updating existing high-resolution land cover products. High-performance computing will be employed to tackle the most computationally intensive aspect of deep learning, the model training process. The trained models will then be integrated into the existing workflows to identify areas of change, which, combined with the existing high-resolution land cover, will enable rapid updating of the statewide land cover dataset. This project will leverage the University of Vermont’s recent investments in high-performance computing architecture: DeepGreen, an NSF-funded supercomputer, will be employed.
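
The core of that hybrid idea can be expressed in a few lines: compare a new model prediction against the existing land cover, carry the existing map forward wherever nothing has changed, and flag only confident disagreements as candidate change for re-extraction and quality control. The sketch below is a toy illustration with synthetic arrays and an assumed confidence threshold, not the project's actual method.

```python
# Toy illustration of the hybrid update: keep the existing land cover wherever
# the new prediction agrees with it (or is uncertain), and flag only
# confident disagreements as candidate change for re-extraction and QC review.
# Arrays, class count, and the 0.9 threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
lc_2016 = rng.integers(0, 8, size=(1000, 1000))   # existing land cover classes
pred_new = rng.integers(0, 8, size=(1000, 1000))  # new deep learning prediction
conf_new = rng.random((1000, 1000))               # per-cell model confidence

# Accept only high-confidence changes, suppressing false change caused by
# source-data differences or mapping error.
changed = (pred_new != lc_2016) & (conf_new > 0.9)
lc_updated = np.where(changed, pred_new, lc_2016)
print(f"cells flagged as candidate change: {changed.sum()}")
```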

The phases for this project are: 1) deep learning system design, 2) deep learning system development, 3) deep learning system implementation, 4) integration of deep learning into the object-based feature extraction workflow, and 5) production of an updated statewide land cover map. The software technologies employed will include TensorFlow for deep learning, eCognition for object-based feature extraction, and ArcGIS for visualization.
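
As an illustration of what phases 2 and 3 might involve, the following sketch trains a small TensorFlow/Keras convolutional network to classify image patches into land cover classes. The band count, patch size, class count, and synthetic data are placeholder assumptions; the actual system would train on labeled high-resolution imagery using the DeepGreen GPU cluster.

```python
# Illustrative sketch for the development/implementation phases: a small
# convolutional network that classifies image patches into land cover classes.
# Band count, patch size, class count, and the data are placeholders.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 8    # assumed number of land cover classes
PATCH = 64         # assumed patch size in pixels
BANDS = 4          # e.g., RGB + near-infrared

model = tf.keras.Sequential([
    tf.keras.Input(shape=(PATCH, PATCH, BANDS)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in patches and labels for demonstration purposes only.
x = np.random.rand(512, PATCH, PATCH, BANDS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=512)
model.fit(x, y, epochs=2, batch_size=32)
```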

This project is incredibly valuable to the State of Vermont, which is struggling to meet regulatory requirements to reduce non-point source pollution in Lake Champlain, the state’s largest lake, which extends into New York and Quebec. Access to current, accurate, high-resolution land cover is imperative if the State is to decide how best to reduce non-point source pollution and to fund these activities. Furthermore, the State has no dedicated remote sensing scientists on staff and lacks the computing and technical resources to carry out land cover mapping at this scale. The intern funded as part of this project will work with a talented team of individuals who are internationally recognized for their expertise in automated feature extraction.


Affinity Groups

Name: DARWIN
Description: DARWIN (Delaware Advanced Research Workforce and Innovation Network) is a big data and high performance computing system designed to catalyze Delaware research and education, funded by a $1.4 million…
Tags: big-data

Name: Large Data Sets
Description: For people who evaluate or use storage options for researchers with large data sets.
Tags: cloud-storage, big-data, data-transfer, open-storage-network, s3, ceph, hpc-storage