For the purposes of submitting an amicus brief to the US Supreme Court, the Puerto Rico Association of Criminal Defense Lawyers (PRACDL) compiled several indictments and docket sheets from the PACER system. Data extracted from these documents were analyzed alongside sociodemographic data from the US Census. There remains an opportunity to analyze the remaining data and present a visual representation of, among other things, the types of cases heard in this court, the length of time cases remain open, the percentage of persons represented by court-appointed attorneys, the average length of sentences, the number of persons granted bail, and the number of persons with bail violations and the reasons for those violations. An understanding of these data will facilitate related future social justice projects in this jurisdiction.
High-Performance Computing vs. Quantum Computing for Neural Networks Supporting Artificial Intelligence
A personalized learning system that adapts to learners' interests, needs, prior knowledge, and available resources is possible with artificial intelligence (AI) that applies natural language processing in neural networks. These deep learning neural networks can run on high-performance computers (HPC) or on quantum computers (QC); both are emergent technologies. The ultimate goal of this project is to understand both systems well enough to select the more effective platform for a deep learning AI program, and to demonstrate that understanding through example. The entry path to technologies such as HPC and QC is narrow at present because it relies on classical education methods and mentoring. The gap between the high demand for knowledge workers and the much slower rate at which teaching expertise develops is widening. Here, an AI cognitive agent, trained via deep learning neural networks, can help in emergent technology subjects by assisting the instructor-learner pair with adaptive wisdom. This project builds the foundations for that AI cognitive agent.
The student facilitator will optimize a deep learning neural network, compare and contrast the newest computing platforms, such as a quantum computer (and/or a quantum computer simulator) and a high-performance computer, and demonstrate the efficiency of the different computing approaches. The student facilitator will perform these tasks at the rate described in the proposal. Milestone work will be displayed and shared publicly by posting Jupyter Notebooks on Google Colab, linked to regular GitHub uploads.
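As a minimal illustration of the kind of benchmark workload the student facilitator might time on different platforms, the Python sketch below trains a tiny NumPy neural network on the XOR problem and measures wall-clock time. The architecture, learning rate, and iteration count are all illustrative assumptions, not the project's actual model; the same harness could later wrap a quantum-simulator backend for comparison.

```python
import time
import numpy as np

# Tiny 2-layer network on XOR: a hypothetical stand-in for the deep
# learning workloads to be benchmarked on HPC and QC platforms.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1)
    return h, sigmoid(h @ W2)

lr = 0.5
losses = []
start = time.perf_counter()
for _ in range(2000):
    h, out = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation of the squared-error loss
    # (constant factors folded into the learning rate).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h
elapsed = time.perf_counter() - start
print(f"final loss {losses[-1]:.4f} after {elapsed:.3f}s")
```

Timing the identical training loop on each platform, rather than comparing platform-specific demos, keeps the efficiency comparison apples-to-apples.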
Machine failure and downtime were comparatively low for the less sophisticated machines of the first two industrial revolutions. Modern manufacturing facilities use highly complex and advanced machines that require continuous health monitoring systems. Bearings are widely used in rotating equipment and machines to support loads and reduce friction. Micron-sized defects on the mating surfaces of bearing components can lead to failure over time. Bearing health can be monitored by analyzing vibration signals acquired with an accelerometer and developing a machine learning framework for feature extraction and classification of bearing conditions. Large defects on bearing elements can be detected and identified by time-domain and frequency-domain analysis of the vibration signals. However, local bearing defects are difficult to detect at their initial stage because of their small size or the presence of noise. In the proposed project, local defects such as cracks and pits on bearing races will be detected using machine learning. As a pilot project, simulated bearing-condition data will be generated from MATLAB Simulink models and used to develop machine learning based predictive maintenance and condition monitoring algorithms. The trained model will be evaluated against real bearing data and ground truth results. The project will first be implemented on a local machine and, once successfully developed, will be ported to a cluster.
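The project generates its simulated bearing data in MATLAB Simulink; purely as an illustrative sketch of the idea, the Python snippet below simulates a healthy signal (shaft sine plus noise) and a faulty one in which each ball pass over a race defect excites a short decaying burst. All frequencies, amplitudes, and decay rates are hypothetical values, not measured bearing parameters.

```python
import numpy as np

# Illustrative simulation of bearing vibration (assumed parameters only).
def simulate_bearing(fault=False, fs=12_000, duration=1.0,
                     fault_freq=107.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(0, duration, 1 / fs)
    shaft = 0.5 * np.sin(2 * np.pi * 29.95 * t)   # shaft rotation component
    noise = 0.2 * rng.standard_normal(t.size)     # measurement noise
    signal = shaft + noise
    if fault:
        # Each ball pass over the defect excites a short decaying burst
        # at an assumed 3 kHz resonance.
        for t0 in np.arange(0, duration, 1.0 / fault_freq):
            m = t >= t0
            signal[m] += np.exp(-800 * (t[m] - t0)) * np.sin(
                2 * np.pi * 3000 * (t[m] - t0))
    return t, signal

def kurtosis(x):
    """Normalized fourth moment; impulsive faults raise it above baseline."""
    x = x - x.mean()
    return float(np.mean(x ** 4) / np.mean(x ** 2) ** 2)

t, healthy = simulate_bearing(fault=False)
t, faulty = simulate_bearing(fault=True)
print(f"kurtosis healthy={kurtosis(healthy):.2f} faulty={kurtosis(faulty):.2f}")
```

The repeated impacts give the faulty signal heavier tails, which is why kurtosis is a common early indicator of local race defects even before the defect dominates the spectrum.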
The machine learning framework will include functions for exploring, extracting, and ranking features using data-based and model-based techniques, including statistical, spectral, and time-series analysis. Bearing health will be monitored by extracting features from vibration data using frequency and time-frequency methods. A student will learn how to organize and analyze sensor data imported from local files, cloud storage, and distributed file systems. The student will learn the complete machine learning project pipeline: data importing, filtering, feature extraction, data distribution, training, validation and testing of multiple machine learning algorithms, and working with clusters. The developed machine learning pipeline will be shared with the research community, and the work will be published in a conference proceeding. The project requires MATLAB toolboxes for signal processing, machine learning, predictive maintenance, statistical analysis, and deep learning. Future work includes assembling large datasets of real and simulated bearing data for predictive maintenance using a cluster-based machine learning framework. Estimated defect sizes will be predicted, compared, and validated against measured crack widths or pit diameters.
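The pipeline itself will be built with MATLAB toolboxes; as a hedged end-to-end sketch of the same steps, the Python example below simulates labeled vibration segments, extracts standard time-domain condition indicators (RMS, kurtosis, crest factor), and classifies them with a simple nearest-centroid model standing in for the toolbox algorithms. Every signal parameter and the classifier choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
FS = 12_000  # sampling rate in Hz (assumed)

def segment(fault, n=4096):
    """One synthetic vibration segment: shaft sine + noise (+ fault impulses)."""
    t = np.arange(n) / FS
    x = 0.5 * np.sin(2 * np.pi * 30 * t) + 0.2 * rng.standard_normal(n)
    if fault:
        for t0 in np.arange(0.0, n / FS, 1 / 107):  # assumed ball-pass period
            m = t >= t0
            x[m] += np.exp(-800 * (t[m] - t0)) * np.sin(
                2 * np.pi * 3000 * (t[m] - t0))
    return x

def features(x):
    """RMS, kurtosis, crest factor: common bearing condition indicators."""
    rms = np.sqrt(np.mean(x ** 2))
    xc = x - x.mean()
    kurt = np.mean(xc ** 4) / np.mean(xc ** 2) ** 2
    crest = np.max(np.abs(x)) / rms
    return np.array([rms, kurt, crest])

labels = np.array([0] * 40 + [1] * 40)          # 0 = healthy, 1 = faulty
X = np.array([features(segment(lab)) for lab in labels])

# Train/test split; z-score features using training statistics only.
train = np.r_[0:30, 40:70]
test = np.r_[30:40, 70:80]
mu, sd = X[train].mean(axis=0), X[train].std(axis=0)
Z = (X - mu) / sd

# Nearest-centroid classification: assign each segment to the closer class mean.
centroids = np.array([Z[train][labels[train] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[test, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = float(np.mean(pred == labels[test]))
print(f"test accuracy: {accuracy:.2f}")
```

Spectral and time-frequency features would enter the same way: as extra columns in the feature matrix, which is also the natural unit for distributing work across a cluster.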
Jetstream2: a transformative update to the NSF’s science and engineering cloud infrastructure that provides 8 petaFLOPS of supercomputing power to simplify data analysis, boost discovery, and… (tags: cloud-open-source, cloud-storage, openstack, ai, machine-learning, tensorflow, science gateway, gpu, nvidia, cuda, jupyterhub, matlab, vnc, containers, singularity)