Syed A. M. Shihab is an Assistant Professor of Aeronautics and Engineering at Kent State University, where he leads the Green and Advanced Mobility Engineering (GAME) lab. He holds a Ph.D. in Aerospace Engineering from Iowa State University and a B.S. in Electrical and Electronic Engineering from American International University Bangladesh. He currently teaches Modeling and Forecasting for Aviation Logistics Planning (AERN 65230) and Introduction to Aerospace Engineering (ENGR 15500).
Driven by an interest in operations planning for traditional and advanced air mobility, Dr. Shihab’s research focuses on developing data-driven, optimal decision-making systems for aviation using artificial intelligence/machine learning, optimization, and operations research techniques. Applications of his work include air traffic management for manned and unmanned aircraft systems (ATM and UTM), pricing and revenue management, scheduling/dispatching, and route/network planning. His research aims to: 1) maximize the performance of air transport systems operating in high-dimensional, dynamic, and uncertain environments, with respect to attributes such as accessibility, efficiency, robustness, resilience, safety, scalability, passenger welfare, societal benefit, and profitability; and 2) overcome the technical gaps and barriers to safely integrating the anticipated autonomous, high-density, diverse operations of advanced air mobility (such as point-to-point passenger and cargo transportation, emergency medical services, search and rescue, and disaster relief using electric/hybrid, manned/human-in-the-loop/unmanned aircraft) into the modern National Airspace System.
During his academic research, he developed DeepARM, a deep reinforcement learning based airline revenue management system. He has also derived a new operational concept for urban air mobility (UAM) services; developed scheduling and dispatching models for scheduled, on-demand, and hybrid UAM services; and extended these models to offer frequency regulation services to the power grid using the batteries of the electric aircraft fleet, generating an additional revenue stream. In other notable research projects, he has built decision support tools for Iowa’s Food-Energy-Water network; investigated unsuccessful cases of systems engineering, such as the F-35 Joint Strike Fighter program; and optimized the design of government-owned large-scale complex engineered systems, such as space telescopes. An up-to-date list of his publications can be found on his ResearchGate and Google Scholar profiles. Beyond his research pursuits, he enjoys playing sports such as cricket, soccer, badminton, volleyball, and golf, and exploring the world. A fun fact about him: he survived a bear ambush in Aspen, CO, during a cycling trip with friends.
There are multiple funded openings in his lab for motivated, hardworking Ph.D., M.S., and undergraduate students (future “GAMErs”) with strong programming and mathematical skills and a keen interest in developing intelligent decision-making and learning systems. Interested students should email him their CV, transcript(s), GRE scores, TOEFL/IELTS scores (international applicants only), and publications, if any. The email should also describe their research interests, the motivation behind those interests, and why they want to join his research group.
• Syed A.M. Shihab, Xuxi Yang, Peng Wei, Jie Shi and Nanpeng Yu, “Optimal eVTOL Fleet Dispatch with Power Grid Compensation and Battery Degradation Cost”, AIAA Aviation, Virtual Conference, June 2020
• Syed A.M. Shihab, Caleb Logemann, Deepak-George Thomas and Peng Wei, “Autonomous Airline Revenue Management: A Deep Reinforcement Learning Approach to Seat Inventory Control and Overbooking”, Workshop on Reinforcement Learning for Real Life, International Conference on Machine Learning (ICML), Long Beach, CA, USA, June 2019
• Syed A.M. Shihab, Peng Wei, and Christina L. Bloebaum, “A Data-Driven Decision Making Framework for Value-Based Engineering Design of Complex Network Systems”, AIAA Aviation, Dallas, TX, USA, June 2019
• Syed A.M. Shihab, Peng Wei, Daniela Jurado, Rodrigo M. Arango and Christina L. Bloebaum, “By Schedule or On-demand? – A Hybrid Operations Concept for Urban Air Mobility”, AIAA Aviation, Dallas, TX, USA, June 2019
• Hanumanthrao Kannan, Syed A.M. Shihab, Maximilian Zellner, Ehsan Salimi, Ali E. Abbas and Christina L. Bloebaum, “Preference Modeling for Government-Owned Large-Scale Complex Engineered Systems: A Satellite Case Study”, Conference on Systems Engineering Research (CSER), Los Angeles, CA, USA, March 2017
• Syed A.M. Shihab and Mohammad Nasir Uddin, “Design and Performance Analysis of Centrally Seeded, Long Reach, Cost Optimized Hybrid DWDM TDMA PON,” International Conference on Electronics and Communication Engineering (ICECE), Dhaka, Bangladesh, December 2012
• Syed A.M. Shihab, Caleb Logemann and Peng Wei, “A Deep Reinforcement Learning Approach to Seat Inventory Control for Airline Revenue Management”, Journal of Revenue and Pricing Management (under review)
• Syed A.M. Shihab and Peng Wei, “DeepARM: A Dynamic Pricing and Seat Inventory Control System for Airline Revenue Management using Deep Reinforcement Learning”, Target Journal: Transportation Science (working paper)
• Syed A.M. Shihab, Peng Wei, and Christina L. Bloebaum, “Optimizing Scheduling and Dispatching Decisions for Urban Air Mobility Operations”, Target Journal: Transportation Research Part B: Methodological (working paper)
• Iowa State University, Department of Aerospace Engineering, Course: Reinforcement Learning and Autonomy, “DeepARM: A Dynamic Pricing and Seat Inventory Control System for Airline Revenue Management using Deep Reinforcement Learning”, Ames, IA, USA, December 2019
• INFORMS Annual Meeting, “By Schedule or On-Demand? - A Hybrid Operational Concept for Urban Air Mobility Services”, Seattle, WA, USA, October 2019
• INFORMS Annual Meeting, “Towards the Next Generation Airline Revenue Management: A Deep Reinforcement Learning Approach to Seat Inventory Control and Overbooking”, Seattle, WA, USA, October 2019
• AIAA Aviation Forum, “By Schedule or On-Demand? - A Hybrid Operational Concept for Urban Air Mobility Services”, Dallas, TX, USA, June 2019
• AIAA Aviation Forum, “A Data-Driven Decision Making Framework for Value-Based Engineering Design of Complex Network Systems”, Dallas, TX, USA, June 2019
• AGIFORS Revenue Management Study Group Meeting, “Towards the Next Generation Airline Revenue Management: A Deep Reinforcement Learning Approach to Seat Inventory Control and Overbooking”, Panama City, Panama, May 2019