The phrase Artificial Intelligence (AI) conjures sci-fi expectations for most of us. In computer science, AI refers to intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. While AI's rate of advance, funding, and enthusiasm are at an all-time high, a wide gap remains between sci-fi expectations and what machines can actually accomplish today. AI is still very far from human-like general intelligence, but it keeps getting better at narrowly defined tasks. So why all the hype? The field is developing quickly, and recent milestones in speech recognition, language translation, game play, computer vision, and chat bots have fueled interest and freed people from repetitive tasks. Another way to frame this discussion is the difference between "narrow" and "general" AI. Most of AI's biggest successes so far have come in narrow AI, i.e., accomplishing a specific task within strict parameters, such as Siri typing a dictated text message for you, or recognizing a cat in an image. There is no notion of self-awareness or general problem-solving ability in narrow AI. Conversely, much of what has captured the public's imagination over the decades is the fantasy of general artificial intelligence. Our faculty pursue research in different types of AI and offer educational training in classic AI, machine learning, and deep learning. We also routinely employ these techniques and algorithms to develop solutions to a variety of challenging problems.
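To make "narrow AI" concrete, here is a toy sketch: a perceptron trained on one strictly defined task (labeling points above or below a line) and competent at nothing else. The data and task are hypothetical, chosen only for illustration; this is a minimal sketch, not a production technique.

```python
# Toy illustration of "narrow" AI: a perceptron learns exactly one
# narrowly defined task and has no knowledge outside it.
# Data below is hypothetical, for illustration only.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:  # update weights only on mistakes
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

# Points labeled +1 if x2 > x1, else -1 -- the model's entire "world".
samples = [(0, 1), (1, 2), (2, 3), (1, 0), (2, 1), (3, 2)]
labels = [1, 1, 1, -1, -1, -1]
w, b = train_perceptron(samples, labels)
print(predict(w, b, 0, 5), predict(w, b, 5, 0))  # 1 -1
```

The model performs well inside its strict parameters and is meaningless outside them, which is the defining property of narrow AI.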
Companies and agencies have been striving to harness the power of their data assets for decades. From an IT perspective, Big Data loosely refers to any dataset that cannot be managed and analyzed with conventional database, Business Intelligence, and Data Warehouse technologies. It is commonly characterized by one or more of the "four Vs": Volume (the size of the dataset), Velocity (the rate of data generation), Variety (traditional structured data alongside growing unstructured data such as free text, images, audio, and video), and Veracity (accuracy and completeness). Captured data is growing at unprecedented rates, doubling every few years, and appears in all shapes and sizes: business transactions, machine logs, sensor data, maps, graphs, documents, blogs, audio, and video. The primary reason organizations cite for investing in Big Data tools and technologies is to enable better, fact-based, agile execution and decision making at the strategic, tactical, and operational levels. While the focus in the past has been on the different platforms needed to support particular use cases (e.g., batch processing versus streaming applications), in the future the technology will become more flexible and transparent to end users. Our faculty have extensive experience in developing methods and tools for harnessing this potential (e.g., data modeling, visualization, high-throughput databases, distributed/parallel computing, GPUs, VMs, and hybrid cloud computing).
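The batch-processing pattern behind many Big Data platforms can be sketched in miniature. The example below counts words in machine-log records (hypothetical data) using the map/shuffle/reduce structure popularized by frameworks such as Hadoop MapReduce; it runs in a single process here, whereas a real system distributes each phase across many machines.

```python
# Minimal single-process sketch of the map/shuffle/reduce pattern
# used by batch Big Data platforms. Log records are hypothetical.
from collections import defaultdict

def map_phase(records):
    """Emit one (word, 1) pair per word in each record."""
    for record in records:
        for word in record.lower().split():
            yield word, 1

def shuffle_phase(pairs):
    """Group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

logs = ["error disk full", "warning disk slow", "error network down"]
counts = reduce_phase(shuffle_phase(map_phase(logs)))
print(counts["error"], counts["disk"])  # 2 2
```

Because each phase only needs local data plus a grouping step, the same three functions scale out naturally when the records are partitioned across machines.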
Data Science is a relatively new discipline. From a business perspective, it enables the creation of data products and helps shift the focus from a simple transactional use of data to treating data as an asset, just like capital and labor. Data Science is the theoretical foundation and collection of techniques necessary to extract value from data; it brings together concepts from modeling, statistics, database theory, mathematics, operations research, computer science, industrial engineering, and several other fields. Analytics is the practical use of Data Science, often in a business context. It is typically categorized as Descriptive Analytics (the platforms and processes for analyzing historical data and creating meaningful dashboards), Predictive Analytics (looking to the future based on history, often relying on forecasting and pattern-recognition techniques ranging from statistics to machine learning and deep learning), and Prescriptive Analytics (making or supporting decisions that meet a unit's goals and objectives, often relying on optimization methods). Business Analytics is not just about executing complex analytical programs; it is also about knowing which questions to ask, how to interpret results, and how to turn insight from data into a desired outcome. This is where cross-disciplinary effort becomes critical to realizing the full promise of Big Data. Our faculty have extensive experience in analytics and a track record of supporting businesses across nearly all industrial sectors.
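The three analytics categories above can be illustrated side by side on one tiny dataset. The monthly demand figures, safety stock, and inventory values below are hypothetical, and the "prescriptive" step is a deliberately trivial stand-in for a real optimization model.

```python
# Toy sketch of descriptive, predictive, and prescriptive analytics
# on hypothetical monthly demand figures (units sold).
import statistics

demand = [100, 110, 105, 120, 130, 125]

# Descriptive: summarize historical data.
mean_demand = statistics.mean(demand)

# Predictive: naive forecast -- average of the last three months.
forecast = sum(demand[-3:]) / 3

# Prescriptive: decide an order quantity that covers the forecast
# plus safety stock, net of current inventory (a trivial stand-in
# for an optimization model).
safety_stock = 10
inventory = 40
order_qty = max(0, forecast + safety_stock - inventory)

print(mean_demand, forecast, order_qty)
```

Each step answers a different question: what happened (descriptive), what is likely to happen (predictive), and what should be done about it (prescriptive).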