Black-box Optimization

Black-box optimization methodologies deal with systems of inputs and outputs whose underlying behavior is unknown. There is no well-defined function describing the system behavior, and consequently no derivatives are available. Most black-box systems are also expensive to evaluate, requiring costly simulations or physical experiments. To optimize such systems, derivative-free optimization techniques, which rely on inexpensive statistical and machine learning surrogate models, are practical alternatives. The output of a black-box system can be deterministic or stochastic, and the input space can be discrete, continuous, or mixed. Many engineering design problems, such as vehicle design, green building design, and material design, involve expensive black-box experiments. Likewise, tuning a deep learning model for an autonomous vehicle is an expensive process that can be tackled using black-box optimization techniques. We work on designing surrogate and sampling techniques to optimize complex, large-scale black-box systems.
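As a minimal sketch of the surrogate idea, the loop below replaces an expensive objective with a cheap local model and only evaluates the true system at the model's proposed minimizer. The objective, budget, and bounds here are illustrative assumptions, not one of our actual systems; a real surrogate would typically be a Gaussian process or radial basis function rather than this simple quadratic fit.

```python
import random

def expensive_black_box(x):
    # Stand-in for an expensive simulation or experiment; in practice
    # each evaluation of this function is costly.
    return (x - 1.7) ** 2 + 0.5

def next_candidate(samples):
    # Surrogate step: fit an exact quadratic through the three best
    # (x, y) samples and propose the x-coordinate of its vertex.
    (x0, y0), (x1, y1), (x2, y2) = sorted(samples, key=lambda s: s[1])[:3]
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2 ** 2 * (y0 - y1) + x1 ** 2 * (y2 - y0) + x0 ** 2 * (y1 - y2)) / denom
    if a <= 0:  # surrogate is not convex: fall back to exploration
        return random.uniform(-5.0, 5.0)
    return -b / (2 * a)

def optimize(budget=15):
    random.seed(0)
    # Initial space-filling design of three evaluations.
    samples = [(x, expensive_black_box(x)) for x in (-4.0, 0.0, 4.0)]
    while len(samples) < budget:
        x_next = next_candidate(samples)
        if min(abs(x_next - x) for x, _ in samples) < 1e-6:
            x_next = random.uniform(-5.0, 5.0)  # avoid duplicate evaluations
        samples.append((x_next, expensive_black_box(x_next)))
    return min(samples, key=lambda s: s[1])

best_x, best_y = optimize()
```

Note that the loop never asks for a derivative of `expensive_black_box`; all gradient information it uses comes from the cheap surrogate, which is the defining trait of derivative-free methods.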

Active Learning

Active learning is an adaptive sampling approach for expensive labeling procedures. Machine learning methodologies typically assume the existence of a sufficiently large pool of labeled data points. This is not a practical assumption in many applications where collecting labels is impossible or expensive. Within a limited budget, active learning decides which sample points to label using single- or multi-criteria optimization. It sequentially selects the most informative point, or batch of points, to be labeled by an oracle and added to the training set. Different active learning scenarios (Membership Query Synthesis, Stream-Based Selective Sampling, Pool-Based Active Learning) and sampling strategies (Uncertainty Sampling, Query-By-Committee, Expected Model Change, Variance Reduction, etc.) have been proposed. We mainly focus on combining different criteria, in particular fairness measures, for sampling to achieve a labeled dataset that is representative of the underlying distribution.
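The pool-based scenario with uncertainty sampling can be sketched in a few lines. The example below is a deliberately simple assumption: a 1-D threshold classifier, a hypothetical `oracle` with an unknown boundary at 0.6, and uncertainty measured as distance to the current decision boundary; real systems would use model-based uncertainty such as predictive entropy.

```python
def oracle(x):
    # Hypothetical labeling oracle; the true boundary (0.6) is unknown
    # to the learner and each call is assumed to be expensive.
    return int(x > 0.6)

def fit_threshold(labeled):
    # 1-D threshold classifier: midpoint between the largest point
    # labeled 0 and the smallest point labeled 1.
    zeros = [x for x, y in labeled if y == 0]
    ones = [x for x, y in labeled if y == 1]
    return (max(zeros) + min(ones)) / 2

def uncertainty_sampling(pool, budget=8):
    # Seed with the two extreme points so both classes are present.
    labeled = [(min(pool), oracle(min(pool))), (max(pool), oracle(max(pool)))]
    unlabeled = set(pool) - {x for x, _ in labeled}
    for _ in range(budget):
        t = fit_threshold(labeled)
        # Uncertainty sampling: query the pool point closest to the
        # current decision boundary, i.e. the most ambiguous one.
        x = min(unlabeled, key=lambda p: abs(p - t))
        labeled.append((x, oracle(x)))
        unlabeled.remove(x)
    return fit_threshold(labeled)

pool = [i / 100 for i in range(101)]  # unlabeled pool on [0, 1]
boundary = uncertainty_sampling(pool)
```

With only 8 queries out of 101 pool points, the learned threshold lands close to the true boundary, which is the budget saving that motivates active learning.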

Fairness

AI, and more specifically machine learning, has had a significant impact on the effectiveness of decision-making processes that leverage enormous amounts of data. Besides their positive effects, ML algorithms may lead to disruptive outcomes that affect individuals' lives. To avoid such negative outcomes, ML practitioners should not turn a blind eye to the societal impact of these tools. Rather, a fairness measure should be considered in at least one of the data collection, learning, or interpretation steps. This allows the algorithm to remain unbiased toward underrepresented clusters of observations and to adjust its behavior by decorrelating the protected features from the predicted outcome. We incorporate different fairness notions in our data collection and learning strategies to ensure the fairness of our algorithmic outcomes.
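One common fairness notion, demographic parity, asks that positive predictions be equally likely across groups defined by a protected attribute. The sketch below computes the parity gap for a toy prediction vector; the data and the binary protected attribute are invented for illustration, and this is one of several notions (others include equalized odds and equal opportunity).

```python
def demographic_parity_gap(predictions, protected):
    # Demographic parity requires P(yhat = 1 | A = a) to be equal for
    # every group a; the gap is the spread of positive-prediction rates.
    rates = {}
    for a in set(protected):
        group = [p for p, g in zip(predictions, protected) if g == a]
        rates[a] = sum(group) / len(group)
    vals = list(rates.values())
    return max(vals) - min(vals)

# Toy binary predictions and a hypothetical protected attribute.
preds     = [1, 0, 1, 1, 0, 1, 0, 0]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, protected)  # 0.75 - 0.25 = 0.5
```

A gap of 0 means the classifier's positive rate is independent of group membership; constraining or penalizing this quantity during data collection or training is one way to operationalize the decorrelation described above.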