CSE Speaker Series – Dr. David Grow

On Friday, March 31, at 11:00 am in Cramer 221, Dr. David Grow of NMT will give a talk on “Overview of Recent Projects in the Robotic Interfaces Lab at NMT.”

Abstract:

Human-robot interfaces enable exciting new synergies for solving interesting engineering problems. This talk will provide an overview of current projects in the lab, with emphasis on work related to the training of orthopaedic surgeons. Orthopaedic surgery is a hands-on specialty that demands strong motor skills. Historically, much of that technical learning has occurred in the operating room (OR). This is an effective method of training for simple fractures or low-risk cases. For more complicated operations or high-risk situations, such as spinal surgery, hand surgery, or surgery with the potential for vascular or nerve impingement, using the OR as an introduction to surgical techniques can introduce unnecessary risk. As of July 2013, prompted by the quality initiative set forth in the Patient Protection and Affordable Care Act, the American Board of Orthopaedic Surgeons (ABOS) and the Residency Review Committee (RRC) for Orthopaedic Surgery have mandated formal motor skills training outside of the OR. The ABOS and RRC believe that this training will enhance residents’ acquisition of basic surgical skills. Currently, no criteria have been established for hands-on training of surgical residents on the common procedures used in each orthopaedic surgery subspecialty rotation. Some institutions have implemented qualitative evaluation methods, while others have purchased expensive robots for evaluation. To address this initiative, we have implemented low-cost, sensor-based modifications to common surgical instruments that quantitatively evaluate the surgical skills of orthopaedic residents.
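As a rough illustration of what such a quantitative evaluation might look like (not the lab's actual method; the sensor format and the jerk-based smoothness score below are assumptions), motion data from an instrumented instrument could be reduced to a simple skill metric:

```python
# Illustrative sketch only: the sensor format and the jerk-based smoothness
# score are assumptions, not details taken from the talk.
import numpy as np

def smoothness_score(accel, dt):
    """Mean squared jerk of a tool-motion trace.

    accel : (N, 3) array of accelerometer samples in m/s^2
    dt    : sampling interval in seconds
    Lower values indicate smoother (typically more practiced) motion.
    """
    jerk = np.diff(accel, axis=0) / dt   # numerical derivative of acceleration
    return float(np.mean(np.linalg.norm(jerk, axis=1) ** 2))

# Example usage with a trace sampled at 100 Hz:
# score = smoothness_score(load_trace("resident_attempt.csv"), dt=0.01)
```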



CSE Speaker Series – Dr. Song Fu

Dr. Song Fu from UNT will give a talk on “Exploiting Disk Performance Signatures for Cost-Effective Management of Large-Scale Storage Systems” in Cramer 221 at 11:00 am.


Abstract:

The huge number of nodes, the overwhelming complexity of interactions among system components, and highly dynamic application and system behaviors make the management of high-end computers extremely challenging. Occurrences of component failures and critical errors, as well as their impact on system performance and operating costs, are an increasingly important concern for system designers and operators. Although hard drives are reliable in general, they are believed to be the most commonly replaced hardware components; it has been reported that 78% of all hardware replacements in data centers were for hard drives. Moreover, with the increasing capacity of single drives and of entire systems, block- and sector-level failures, such as latent sector errors and silent data corruption, can no longer be ignored. Existing disk failure management approaches are mostly reactive (replacement, and possibly diagnosis, only after disk drives have failed) and incur high overhead (disk rebuilds); they do not provide a cost-effective solution for managing large-scale production storage systems. To overcome these problems, we design, develop, and evaluate novel disk health analysis and failure prediction technologies and tools for production storage systems. Specifically, we address a series of crucial issues: systematic mechanisms for online disk health probing; failure categorization and modeling to discover types of disk failures and derive disk performance signatures; innovative disk failure prediction techniques that leverage performance signatures and degradation tracking to accurately forecast when failures of each discovered type will occur; and proactive data rescue and preventive disk reliability enhancement with easy-to-use APIs for storage users and developers. This research enables a deep understanding of storage health and reliability and natural, cost-effective, low-overhead support for data protection.
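As a rough sketch of the kind of signature-based prediction the abstract describes (the feature names, failure horizon, and classifier below are assumptions, not the authors' actual design), a drive-health model might look like this:

```python
# Hedged sketch: learn disk "performance signatures" from SMART-style health
# counters and flag drives likely to fail soon. Feature names, the failure
# horizon, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["reallocated_sectors", "pending_sectors", "read_error_rate",
            "seek_error_rate", "temperature_c", "throughput_mb_s"]

def train_signature_model(history, failed_within_horizon):
    """history: (n_drives, len(FEATURES)) counter snapshots.
    failed_within_horizon: 1 if the drive failed within the horizon, else 0."""
    model = RandomForestClassifier(n_estimators=200, class_weight="balanced")
    model.fit(history, failed_within_horizon)
    return model

def flag_at_risk(model, current_snapshots, threshold=0.7):
    """Indices of drives whose predicted failure probability exceeds the
    threshold, so their data can be proactively migrated or re-replicated."""
    prob_fail = model.predict_proba(current_snapshots)[:, 1]
    return np.where(prob_fail >= threshold)[0]

# Usage sketch: fit on historical telemetry, then scan the live fleet daily.
# model = train_signature_model(X_history, y_failed)
# at_risk_drives = flag_at_risk(model, X_today)
```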




CSE Speaker Series – Dr. Jed Crandall

On Friday, March 24, 2017, at 11:00 am in Cramer 221, Dr. Jed Crandall of UNM will give a talk on Collecting “Big Data” to Understand the Impact of Global Internet Censorship and Surveillance.

Abstract:
Censorship and surveillance on the Internet is a global phenomenon with far-reaching and transformative effects on society, yet research on this phenomenon is still nascent and limited in scope (e.g., to a single country or a short timeframe). Important questions go unanswered. For example, how commonly are support websites made inaccessible to at-risk populations (such as domestic abuse victims) because they are mis-categorized as pornography? What role do software and Internet media companies play, intentionally or unwittingly, in state surveillance in various parts of the world? Who decides which keywords trigger censorship or surveillance in different market segments for different countries? How are the national-scale firewalls that limit Internet traffic evolving?

Longitudinal datasets that are global in scope are needed to truly understand the impact and nature of Internet censorship and surveillance, but how do you collect large datasets about a phenomenon that is shrouded in secrecy? In this talk I’ll discuss two research thrusts that my group is pioneering, each of which has the potential to scale to truly “big data”.

One research thrust is TCP/IP side channels, which make it possible to measure network conditions between any two points in the world without having any infrastructure at either point or in between. In other words, using a single Linux machine here in North America we can, for example, determine whether an IP address in Zimbabwe can communicate with another IP address in Saudi Arabia, or whether a firewall restricts their communications. It sounds like magic, but I’ll explain how this is made possible through spoofed return IP addresses and careful monitoring of remote machines’ network stack state. Our goal is to measure Internet censorship everywhere, all the time.
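For readers curious what such a measurement can look like, below is a rough sketch in the spirit of an IP-ID side-channel probe. It is not the speaker's measurement code; the reflector/target roles, port, and the interpretation thresholds are assumptions, and real measurements require raw-socket privileges and careful ethical review.

```python
# Rough sketch of an IP-ID side-channel probe (assumed details, not the
# speaker's actual tooling). A "reflector" host with a globally incrementing
# IP ID counter is sampled before and after a spoofed SYN is sent to the
# target on its behalf; how far the counter advances hints at whether the
# two hosts can communicate.
from scapy.all import IP, TCP, sr1, send  # requires scapy and root privileges

def ipid_of(reflector, port=80):
    """Elicit a RST from the reflector and return the IP ID of that reply."""
    reply = sr1(IP(dst=reflector) / TCP(dport=port, flags="SA"),
                timeout=2, verbose=0)
    return None if reply is None else reply[IP].id

def probe(reflector, target, target_port=80):
    before = ipid_of(reflector)
    # Spoof a SYN to the target that appears to come from the reflector.
    send(IP(src=reflector, dst=target) / TCP(dport=target_port, flags="S"),
         verbose=0)
    after = ipid_of(reflector)
    if before is None or after is None:
        return None
    # A jump of about 2 suggests the target's SYN/ACK reached the reflector
    # (which then spent an extra IP ID on the RST it sent back); a jump of
    # about 1 suggests something on the path blocked that communication.
    return (after - before) % 65536
```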

The second research thrust is reverse engineering. We are collaborating with the Citizen Lab at the University of Toronto to reverse engineer closed-source software and reveal its secrets. Some companies implement censorship and surveillance within their software, while others make claims about privacy and cryptography that aren’t true, thereby putting the communications of journalists, activists, ethnic minorities, and many others at risk. The sheer amount of software that is out there and in use by at-risk populations makes this essentially a “big data” problem.



CSE Speaker Series – Max Planck

On Friday, February 24, at 11:00 in Cramer 221, Max Planck of ICASA will give a talk on DAVE, the Data Analysis and Visualization Environment.

“A Problem With Architectures – How Big Data made “easy” makes analytic development hard”: This talk focuses on a specific challenge encountered while building an analytic development system called the Data Analysis and Visualization Environment (DAVE). The purpose of DAVE is to allow non-programmers to compose data-flow-based analytic processes against many disparate Big Data sources. The problem is that while any single step in an analytic process can be efficiently mapped to a Big Data programming paradigm (e.g., MapReduce, Bulk Synchronous Parallel, Spark, Kafka, Storm, columnar/graph databases), it appears to be difficult to handle the IO between steps in a way that preserves that efficiency. This talk presents the problem and some initial ideas about how to deal with it.
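The inter-step IO cost is easiest to see in a toy example. The sketch below is illustrative only (the step names and file-based handoff are assumptions, not DAVE's design): each step is efficient on its own, but composing them naively forces every intermediate result to be written out and read back in.

```python
# Toy illustration of the inter-step IO problem (not DAVE's actual design).
import json

def step_filter_errors(in_path, out_path):
    """Keep only records whose status is 'error' (one full pass over the data)."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if json.loads(line).get("status") == "error":
                dst.write(line)

def step_count_by_host(in_path, out_path):
    """Count surviving records per host (a second full pass, re-reading
    everything the previous step just wrote to disk)."""
    counts = {}
    with open(in_path) as src:
        for line in src:
            host = json.loads(line)["host"]
            counts[host] = counts.get(host, 0) + 1
    with open(out_path, "w") as dst:
        json.dump(counts, dst)

# A non-programmer composes [step_filter_errors -> step_count_by_host] in a
# data-flow editor. Each step maps cleanly onto a bulk-processing engine, yet
# the pipeline pays a full write plus re-read between them; fusing or
# streaming between steps avoids that cost, but doing so generically across
# disparate Big Data backends is the hard part the talk addresses.
```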

CSE Speaker Series – Dr. Drew Hamilton

On March 1, from 11:00 to 12:00 in Cramer 221, Dr. Drew Hamilton from Mississippi State University will discuss “What if a Simulation is Too Good?”

The need for simulation software vulnerability assessment is being driven by three major trends: increased use of modeling and simulation for training and operational planning; increased emphasis on coalition warfare and interoperability; and increased awareness of the potential security risks inherent in sharing operationally useful software. This presentation will address calculating the parameterization of the simulation, as well as disassembly, decompilation, and runtime execution analysis. We will also discuss the training, tactics, and procedures that can be gleaned from a high-fidelity simulation. The presentation will describe, in an unclassified manner, the process developed by Dr. Hamilton and the Missile Defense Agency to evaluate potential vulnerabilities in shared simulation software.