Rethinking Optimization: A Call for a Broader Perspective

By Pegah Nokhiz (Cornell Tech)
What are the implications of mathematical optimization achieving its goals but neglecting broader impacts? Who gets excluded when decisions prioritize efficiency above all else? Can the knock-on effects of seemingly successful optimization be addressed? That is, are these effects accidental, or are they preventable flaws? Is there a need to reshape our thinking to ensure that optimization supports not just outcomes, but the people and systems affected along the way?
Optimization is often regarded as one of the most widely used mathematical decision-making tools in fields ranging from engineering and artificial intelligence to urban planning and business. It offers efficient solutions by reducing complex problems into clearly defined objectives and constraints. However, in its pursuit of efficiency, optimization can inadvertently sideline important social and contextual considerations, leading to consequences that are unexpected and deeply impactful.
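In its textbook form, this reduction can be written as the standard constrained program:

$$\min_{x \in \mathcal{X}} \; f(x) \quad \text{subject to} \quad g_i(x) \le 0, \quad i = 1, \dots, m$$

Anything the modeler does not encode in the objective $f$, the constraints $g_i$, or the feasible set $\mathcal{X}$ is simply invisible to the solver; the concerns discussed below all begin at that boundary.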
A significant issue in the way optimization is currently practiced is that its implementation often fails to account for all the parties affected by its outcomes. Stakeholders who are not part of the initial design or decision-making processes are frequently overlooked. Their interests, values, and lived experiences might never make it into the formal models used. This disconnect, closely related to what economists call externalities, results in outcomes that, while optimal in a narrow technical sense, may erode trust, reduce performance, or generate societal harms over time [2, 4].
Importantly, not all such outcomes should be generically labeled as "unintended consequences". This common term is overly broad and risks obscuring meaningful distinctions. It suggests a kind of passive accident when, in fact, many such outcomes stem from identifiable oversights like faulty assumptions or short-term thinking. By lumping everything into a single category, we lose the opportunity to investigate root causes and to design more tailored responses.
To address these challenges, there is a need for a structured approach to identifying the causes and effects of these unintended consequences rather than treating them as monolithic or accidental side effects. It becomes important to ask: What kind of oversight or flaw led to this outcome? Was it due to a lack of understanding of the system’s complexity, a mistake in the assumptions or data, or an overemphasis on immediate returns? Each of these issues has a distinct remedy and requires a tailored strategy.
For example, when a company optimizes access schedules for a water or coffee station to reduce idle time, the efficiency gain may come at the cost of reduced informal employee interactions. These interactions (although not directly contributing to mathematical productivity metrics) play a vital role in fostering collaboration and morale. By focusing solely on the time saved, the optimization process ignores deeper workplace dynamics that shape engagement among employees [4, 5, 6].
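A minimal sketch of that trade-off, in which every functional form, number, and weight is an illustrative assumption rather than a measurement from any real workplace:

```python
# Hypothetical sketch: a narrow objective vs. one that also values encounters.

def idle_minutes(gap: int) -> float:
    """Assumed idle time per employee per day; shrinks as visits are staggered."""
    return 30.0 / (1 + gap)

def encounters(gap: int) -> float:
    """Assumed chance encounters per day; staggering erodes them roughly linearly."""
    return max(0.0, 8.0 - 0.8 * gap)

GAPS = range(11)  # candidate stagger gaps between visits, in minutes

# Narrow objective: minimize idle time only.
narrow_best = min(GAPS, key=idle_minutes)

# Broader objective: also value encounters, at an assumed exchange rate.
WEIGHT = 2.0  # minutes of idle time considered worth one encounter
broader_best = min(GAPS, key=lambda g: idle_minutes(g) - WEIGHT * encounters(g))

print(f"narrow optimum: {narrow_best}-minute gap")    # maximal staggering (10)
print(f"broader optimum: {broader_best}-minute gap")  # preserves some overlap (3)
```

The point is not the particular numbers: once the informal interactions are given any weight at all, the optimum moves away from the corner solution that pure idle-time minimization selects.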
In another case, using artificial intelligence tools trained on historical data to facilitate hiring decisions can seem like an efficient solution. But if the historical data encodes past discrimination, the algorithm can perpetuate or even exacerbate that bias. Such outcomes not only harm excluded individuals but may also trigger regulatory scrutiny and damage the hiring organization's reputation [1, 3, 7, 8].
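A toy, fully synthetic illustration of this mechanism: if past hiring held one group to a higher bar, a model fit to those decisions learns the same disparity. Every rule and number below is an assumption for exposition, not a real hiring model.

```python
import random

random.seed(0)

def historical_label(skill: float, group: str) -> int:
    """Biased past decisions: group B was (hypothetically) held to a higher bar."""
    bar = 0.5 if group == "A" else 0.7
    return int(skill > bar)

# Historical records with identical skill distributions in both groups.
records = [(random.random(), g) for g in ("A", "B") for _ in range(5000)]
labeled = [(skill, g, historical_label(skill, g)) for skill, g in records]

# "Train" a minimal per-group model: the lowest skill ever hired in each group.
learned_bar = {
    g: min(skill for skill, grp, y in labeled if grp == g and y == 1)
    for g in ("A", "B")
}

# Two equally qualified candidates are now treated differently.
skill = 0.6
for g in ("A", "B"):
    print(f"group {g}: learned bar ~= {learned_bar[g]:.2f}, "
          f"hired: {skill > learned_bar[g]}")
```

Nothing in the training step is "wrong" by its own lights; the model faithfully reproduces the decision boundary it was shown, which is exactly the problem.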
To address such situations, there might be a need for both better descriptive awareness and stronger normative direction. First, there is a need for tools to trace and quantify unintended effects, making it easier to adjust goals or rules to accommodate broader stakeholder needs. However, there is also a need to introduce a way of thinking that encourages decision-makers to reflect on the long-term and interconnected consequences of their actions. This duality is key: understanding who is affected and why is not enough; we also need to know when and where to act.
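One way to picture the descriptive half of this duality is an audit-then-adjust loop: optimize, measure side effects on stakeholder metrics, and tighten the rules when a metric degrades past an acceptable level. The sketch below is a minimal illustration; every function, metric, and threshold in it is an assumption.

```python
from typing import Callable, Dict

def optimize(cost: Callable[[float], float]) -> float:
    """Toy optimizer: grid search over one decision variable in [0, 10]."""
    return min((x / 10 for x in range(101)), key=cost)

def audit(decision: float) -> Dict[str, float]:
    """Hypothetical side-effect metrics for parties outside the objective."""
    return {"morale": 1.0 - 0.08 * decision, "trust": 1.0 - 0.05 * decision}

THRESHOLDS = {"morale": 0.6, "trust": 0.6}  # assumed minimum acceptable levels

def base_cost(x: float) -> float:
    return (x - 9.0) ** 2  # the narrow objective pushes the decision toward 9

cost, penalty = base_cost, 0.0
for _ in range(5):
    decision = optimize(cost)
    violations = {m: v for m, v in audit(decision).items() if v < THRESHOLDS[m]}
    if not violations:
        break
    penalty += 1.0  # adjust the rules: penalize the harm-driving variable
    cost = lambda x, p=penalty: base_cost(x) + p * x ** 2

print(f"decision: {decision:.1f}, audited metrics: {audit(decision)}")
```

The normative half, deciding which metrics to audit and where to set the thresholds, is precisely what cannot be automated away.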
There is a need to shift from treating optimization as a purely technical exercise to viewing it as a socially embedded process, to remain vigilant against the oversights described above, and to make optimization no longer just a question of what works best, but of what works for all involved.
A Call for a Broader Outlook
In light of these insights, there is a need to recalibrate how we design and deploy optimization systems. Regardless of the technical framework one adopts, practitioners, researchers, and decision-makers must adopt a broader perspective that embraces complexity, acknowledges indirect effects, and is not purely efficiency-driven. The goal should not be to reject optimization, but to evolve it from a narrow instrument of performance into a tool that genuinely aligns with values, overarching societal goals, and long-term design.
References
[1] Sahin Ahmed. 2024. Navigating the Pitfalls of AI in Hiring: Unveiling Algorithmic Bias. https://medium.com/@sahin.samia/navigating-the-pitfalls-of-ai-in-hiring-unveiling-algorithmic-bias-9e62b50b3f65
[2] Camelia Bejan. 2024. On the shareholders versus stakeholders debate. Journal of Economic Behavior & Organization, 218, 68–88.
[3] Zhisheng Chen. 2023. Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 567.
[4] Marc Fleurbaey, Ravi Kanbur, and Brody Viney. 2023. Social Externalities and Economic Analysis.
[5] Kristie Lynne McAlpine. 2017. Don't Abandon the Water Cooler Yet: Flexible Work Arrangements and the Unique Effect of Face-to-Face Informal Communication on Idea Generation and Innovation. Ph.D. Dissertation. Cornell University.
[6] Sævar Örn Sævarsson. 2022. The post-COVID water cooler effect: the meaning of interpersonal interaction of peers for the future of work. Ph.D. Dissertation. Reykjavik University.
[7] Cindy Gordon. 2023. AI Recruiting Tools Are Rich With Data Bias And CHROs Must Wake Up. Forbes. https://www.forbes.com/sites/cindygordon/2023/12/31/ai-recruiting-tools-are-rich-with-data-bias-and-chros-must-wake-up/
[8] Reuters. 2018. Insight: Amazon scraps secret AI recruiting tool that showed bias against women. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
Pegah Nokhiz
DLI Postdoctoral Fellow
Cornell Tech