. . . A problem can be solved only when we approach it thus. We cannot approach it anew if we are thinking in terms of certain patterns of thought, religious, political or otherwise. So we must be free of all these things, to be simple. That is why it is so important to be aware, to have the capacity to understand the process of our own thinking, to be cognizant of ourselves totally; from that there comes a simplicity . . .
These are the lines of wisdom written by Jiddu Krishnamurti (1895 – 1986) in his essay on Simplicity in his book, The First and Last Freedom (HarperCollins 1975). The lines indicate something very important: that sticking to, or depending on, a priori notions or knowledge complicates a problem. It is like the aptly worded Zen Buddhist teaching: empty the mind to see things as they are. Or in Steve Jobs' (1955 – 2011) words (from his 1998 interview with Businessweek): simple can be harder than complex: you have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains. Knowledge is important to the degree of making it part of one’s intuition or transformative experience – once done, its use must not cloud cognitive processes. To address a problem as is (the process is generally described in Buddhism as seeing things through the Purities of Mind and View), its nature must be broken down into simple fundamental pieces. The understanding of fundamentals lets things unfold naturally as they are - somewhat like what is generally known as the Reductionist approach. One more reason for doing so is that the observer, the subject (the problem solver), must free himself or herself from affecting the observed, the object (things associated with the problem; see The Quantum World), or from projecting his or her own personal pattern of thoughts upon the observed. How is all this relevant for a system such as Artificial Intelligence (AI)? Can AI reach a level of intelligence high enough to break down its sphere of activities (which, in many cases, represents the science of complex systems incorporating human, Natural and Social processes and issues) into simple terms – so as not to cloud its cognitive processes? Answers to these questions are important for one to clearly comprehend the potentials, limits and consequences of many AI-Powered Products and Services (AIPPS).
. . . 1. Some Questions on AI Prospects and Threats
Before answering, let us examine the premise further from a different but practical angle. An individual using AI-powered appliances (e.g. some smart gadgets) faces different dimensions of reality – some are outright good, others are questionable. These appliances are flooding the market – and we have no option other than to get used to them. A smartphone or a search engine knows what an individual’s behaviors are, and modulates the uses and activities accordingly (with the notion of helping users!). To do that, AI on these devices must be monitoring user activities to fit them into patterns or to define a new one – often with the purpose of profiling and casting them into known statistics (a priori knowledge) – Bayesian or otherwise (see The World of Numbers and Chances). Many of these practices are utterly annoying and even damaging, encroaching upon people’s privacy and what not. Although I have mentioned the two devices as an example – the practices and concerns are ubiquitous in many spheres of AIPPS. When one thinks about the growing frustrations that are upon us – one cannot avoid asking:
An alarming aspect of it – one that has been and is being fueled by cyberspace activities – is the proliferation of the process of Vigilantism (the acts of investigation, enforcement and punishment without legal authority) across the globe. It is not difficult to imagine that the process of self-appointed policing, carrying the agenda of ulterior motives, is responsible for destroying many people’s lives and livelihoods with virtual impunity. One should not be surprised if it appears that impersonation, extra-judicial surveillance, lynching and other heinous activities – in zeal and intensity shaming even medieval and past historical atrocities – are in action, targeting mostly minority communities. One may cry for rule-of-law, democratic or otherwise – but no governing institution/authority appears powerful enough – or is using the process for its own purposes – or feels the responsibility to stop it. As an answer to the last question, it is important to realize that AI developers and industries – for that matter all businesses – prefer to be self-regulated – it is the mantra in any society (the enormous boost to this mantra came in the 1980s during the tenures of British PM M.H. Thatcher, 1925 – 2013, and American President R.W. Reagan, 1911 – 2004; the boost has raised the level of social inequity, which has only mushroomed profusely in present times. Further, funding to research institutes and universities was given a different approach and focus; government funding catering to neutral research was allowed to drain away as the system was given the boot. Instead, funding from corporations and special interest groups was encouraged – thus introducing a biased system to augment the business interests of private entities). But they also want to have some form of rules and regulatory framework – otherwise, it is difficult to do business in the chaotic world of an unruly jungle. While they want as much, industries and businesses are, as well, afraid of over-regulation – regulators overreaching their power and mandate (some manifest as sources of bureaucratic corruption and hurdles), thus inhibiting thriving, creativity and business innovation. But well-thought-out guidelines and directions are good for all businesses (as they hinder the growth and survival of unscrupulous businesses). {For clarity of understanding: a regulation represents a set of rules and guidelines that are formulated and exercised by a government oversight agency. They come in the form of codes and by-laws, and all participating entities for whom the regulations apply or are intended must abide by them}. There we are. While thinking of some questions we are forced to ask more questions – in particular, on ethics and security grounds. It is reasonable to say that at this time the AI world is rather unruly, focused mostly on money-making at the cost of everything else – with no clear and specific regulatory or ethical directions to limit and control some AIPPS. Societal leadership is a failure in that respect – unable or unwilling to understand and deal with the harmful consequences. However, against the backdrop of these crucial aspects, there are also some AIPPS that are outright beneficial – such as a smart faucet closing off the valve if not in use for some time, or a smoke alarm switching on the sprinkler to prevent a fire hazard. People must appreciate many such available or potential good AIPPS, but must also be wary of bad ones.
Perhaps the power of AI was first demonstrated in the chess championship between a computer and a chess grandmaster. This happened in 1996 and 1997, when the IBM computer Deep Blue was able to defeat the then world chess champion Garry Kasparov (1963 - ), first in one game – and then in the whole six-game match in 1997. Today, many game apps are on the market – where a ghost AI player can observe the human end-user’s playing strategy, learn from such moves and adapt by developing a counter-strategy like a worthy opponent. Let us attempt to understand all these issues in simple terms. It is an attempt by a non-AI professional – but a keen observer – trying to delve into the basics and application issues of AI – let’s say – from a different perspective. As shown in the attached image, to make things simple I have attempted to break down the AI system into three fundamental but interconnected phases or subsystems. Some of it represents what people aspire an AI system to be – rather than what it is at present. Missing from these phases are the roles of the human mind. The roles cover such mental faculties as curiosity, imagination, inspiration, creativity and sublime qualities (see The All-embracing Power of Sublimities). These attributes are as much a function of intelligence as of the mind (see The Power of Mind). A human - for all different reasons - may rise up at one time or other, for example, saying - enough is enough, let's forget all these, let me try to clean my thought processes, think plain, straight and simple to get to the bottom of this. Can AI do this like a human? The reality that some of the mental faculties may never be replicated by AI (even if it does replicate them, it will be something else – not the mind as we know it) precluded the inclusion of mind phenomena in the image. In the Natural Equilibrium piece I have written: the human mind is such that no two individuals act or react in the same way to an identical stimulus. Even if the reactions can be cast into statistical patterns – some degree of uncertainty will always define them. This piece is laid out by first browsing through some AI basics, then revisiting our common understanding of ethics, morals and laws. While elaborating on those, I have also tried to spend some time discussing/exploring some AI-powered applications in the applied physics of civil engineering. Drawing up this piece was made possible by information gathered from different websource articles and essays, written by many authors – it is as much of a learning experience as of a presentation.
. . . 2. AI in a Nutshell
What is AI? Imagine a software (SW) that can work (or imagine it as a person, as in robotics), and can be used as a tool – openly and/or surreptitiously on the platforms of the cyberworld and machines – with no potential limits whatsoever on the arena of replicating the known. Even in the cases of some unknowns, its algorithms can tap memories and libraries of information to figure out answers. The limits refer not only to the scientific capabilities of AI itself – but also to the fact that societies have not imposed, and perhaps have no intention of imposing, specific regulatory limits on AI (so far, not even in scenarios of harmful effects – both short and long term). As pointed out earlier, the only limit AI faces is replicating the many functions of the mind. One way of thinking of AI is to simply imagine it like a child, but a machine – its smartness or lack of it depending on the AI programmers/developers who are its parent and teacher.
So, if parenting and teaching are good and smart, AI can become smart too – the machine in this case is obedient and an avid learner – while a child may not be. Just as a child can be trained to stay safe and secure and to protect its privacy - so an AIPPS can be programmed to behave as such (depending on the willingness of the programmer and developer). And to behave as such while in action, by respecting the users for whom it is meant - both in learning from the user and in delivery. AI is designed to perform the tasks of continuous machine learning (ML) – to refine approaches and methods mimicking the likeness of human intelligence – and adapt accordingly. In a nutshell, such a powerful SW is AI (~ 1955; the American computer scientist John McCarthy, 1927 – 2011, is credited as the pioneering father of AI. The first Nobel Prize recognizing AI work, in the Physics category, was awarded in 2024 to the American physicist John Joseph Hopfield, 1933 - , and the British-Canadian cognitive psychologist Geoffrey Hinton, 1947 - . The latter is credited as one of the pioneering fathers of the Artificial Neural Network). On one hand, it has the potential to take some human achievements onto a fast lane, and to do unmanageably difficult tasks. On the other, it can be used to cause harm. Both uses have been and are proliferating at an exponential rate. People are watching the development and performance of AIPPS with awe and optimism – but are fearful and wary of the high risk that AI applications could tempt unscrupulous quarters to deliberately conduct unwholesome activities.
. . . 2.1 Biological Neural Network
AI – to be exact, the Intelligence of Machines – aims to design and program machines to mimic human intelligence – to perform (almost) like humans. In order to do so, AI initiatives must begin by understanding the system of the Biological Neural Network (BNN) – in humans and other creatures. These understandings lay the foundation on which to build the replicating efforts of AI – but it must also build into its algorithms the ever-expanding knowledge, theories and principles developed in various disciplines of science, technology and social relations. Needless to say, the motivation behind launching such AI initiatives comes from the natural progression of human quests to improve upon the socioeconomics and quality of life and livelihoods (see more on The Wheel of Life). The human BNN is composed of neurons or nerve cells and inter-communicating signaling mechanisms – that work on complex interactions with itself, and with the surroundings of Nature and Society – collecting and communicating information through the 5 body senses. Buddhism (6th century BCE) goes further than that by including mind as the 6th sense. This non-material sense is the forerunner of everything one does – with the boundless capacity to roam around in space and time – to collect information and modulate the cognitive processes together with the five material senses (the processes of the Five Aggregates; see The Power of Mind). Working on the paradigm of the laws of Transience and Dependent-origination, or the cobweb of interdependence – the system-processes of the 6 senses (controllable by the individuals who own them) lead to the development of intellect, smartness and cleverness – which direct one to decision-making initiatives or undertakings (see Leadership and Management). On top of that, the action-reaction duo calibrates itself with the individual’s and society's ethical/moral compass.
Individually, each of these 6 senses lets the observer develop a particular image of the observed. But if and when all the 6 senses are combined, the totality of the observed takes shape. For example, when one sees a lion one recognizes it as such – when one hears its thunderous voice one knows more of the creature – and so on. And when one uses the mind, he or she develops the wisdom of the rationality of its existence – its past, its evolution, its belonging to Nature, etc.
. . . 2.2 Artificial Neural Network
AI relies on modeling the BNN cognitive processes to develop the Artificial Neural Network (ANN) - a process known as Deep Learning or DL of machines. Artificial neurons or computational nodes use mathematical models to process information collected through input nodes and deliver the processed/computed information through output nodes. The system of nodes and flow-paths is adaptive – in that the system structure and flow-paths change direction depending on the nature of the input. They form a complex loop of interacting activities – from collecting information, to processing it through cognitions, to intelligent decision making. The final phase creates new information/memories of inputs – so the system-loop goes on. This mechanism of adaptability, responding to input signals or stimuli, constitutes the programming intelligence. Some example applications of AI are: speech recognition, image analysis, and adaptive control to create video games and robotics. By virtue of interdependence and interactions, the activities are far from deterministic – they are rather laden with the uncertainties of stochastic processes (see Uncertainty and Risk and The World of Numbers and Chances). Here is a simple parallel. Closely similar to ANN, but without the capability of adaptability, is the NETFLOW water modeling SW (Delft Hydraulics; the model was developed in the 1980s, and has perhaps metamorphosed into something else now; I had the opportunity of using it for a short time during my career). This model consists of nodes and channel networks – where an efficient and fast solution scheme was implemented in a staggered manner – with nodes solving the mass-balance equation, leaving the channel networks to solve the 1-D Saint-Venant momentum-balance equation. The logic implemented in AI is powered by digital processing controlled by algorithms of adaptability. This is in contrast to some of our day-to-day computer uses – which are powered by digital processing controlled by rigid algorithms. It is rigid in the sense that the programming logic is designed to work one way, dictating the outputs in response to certain input signals, or an intended purpose (a toy sketch of this adaptive-versus-rigid contrast appears at the end of this subsection). The comparison of the two approaches leads one to infer that AI is close to replicating the processes of fluidity and multiplicity in Nature and Society (see The Fluidity of Nature; Duality and Multiplicity in Nature; Social Fluidity; and Duality and Multiplicity in a Society), and is therefore very powerful. How does AI-ANN accommodate the sense organs of BNN? In AI robotics all the 5 senses are developed and have straightforward applications – some at a more advanced level than others. For AI programs that sense and collect consumer behaviors remotely, the inputs come through a micro-camera as the eye; a microphone as the ear; and a keyboard or touch screen as the sense of touch. The smelling and tasting senses in remote AI operations are perhaps not of much essence – as the AI focus is on information processing.
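To make the adaptive-versus-rigid contrast a little more concrete, here is a minimal, hypothetical Python sketch: a single artificial neuron that adjusts its internal weights in response to the input signals it is exposed to, placed next to a rigid rule that never changes. The data, the learning rate and the function names are illustrative assumptions for this piece only; they do not represent any particular AIPPS or the NETFLOW software.

```python
# A minimal, illustrative sketch (not any product's actual code) of the
# adaptive-versus-rigid contrast: a single artificial neuron adjusts its
# internal weights in response to input signals, whereas a rigid routine
# always maps the same input to the same output.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 2 input signals per sample, 1 target response.
inputs = rng.uniform(-1.0, 1.0, size=(100, 2))
targets = (0.7 * inputs[:, 0] - 0.3 * inputs[:, 1] > 0).astype(float)

# --- Rigid algorithm: behavior is fixed once and for all by the programmer ---
def rigid_rule(x):
    return 1.0 if x[0] > 0 else 0.0   # never changes, whatever the data say

# --- Adaptive artificial neuron: behavior changes with the data ---
weights = rng.normal(size=2)
bias = 0.0
learning_rate = 0.5

def neuron(x):
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))  # sigmoid activation

for epoch in range(200):                   # repeated exposure to stimuli
    for x, t in zip(inputs, targets):
        y = neuron(x)
        error = y - t                      # mismatch between output and target
        weights -= learning_rate * error * y * (1 - y) * x   # adjust weights
        bias -= learning_rate * error * y * (1 - y)          # adjust bias

# After "learning", the neuron's responses track the underlying pattern,
# while the rigid rule keeps doing exactly what it was told to do.
test = np.array([0.5, 0.4])
print("adaptive neuron:", round(float(neuron(test)), 3), "| rigid rule:", rigid_rule(test))
```

The point of the toy example is only the mechanism: the neuron's behavior is shaped by the data it encounters, while the rigid rule keeps dictating the same output for the same input.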
Returning to the remotely gathered three-pronged information (camera, microphone, keyboard/touch-screen inputs): it can be labeled, or an attribution map can be chalked out, to designate it as of good or bad smells and tastes (figuratively speaking). Again, as pointed out earlier, many aspects of the 6th sense, the mind phenomenon, are not programmable by AI – not yet. The reason is that AI, by its very nature, has no freedom of simplicity – being constrained by the intricacies of the algorithms of machine learning and delivery or action. However, at least a portion of it is done through psychological profiling of the consumer/user. Consumer research, social media data and data collected through other sources are used to determine what is in demand and what is not. The determined profiles form the basis for developing, or feeding into, stereotyped psychological notions of groups – ethnic, religious, age, gender, etc. Psychological profiles (there are serious ethical questions about this practice) are inadequate to cover the domain of the mind entirely, which is too broad and complex – therefore perhaps many aspects of the mind phenomenon are left out of the equation. One such aspect – for better or worse – is that AIPPS can avoid many of the emotional ups and downs that define a living being.
. . . 3. Ethics, Morals and Laws
Ethical aspects of the AI system are highlighted twice in the attached image. The prevailing level of ethics, morals and laws defines the standard of the society one lives in (see Social Order) – and many of them are not yet an integral part of present-day AI algorithms. But I have included them – to stress that they should be, or are required to be, built into AI systems. Because by not doing so, the vendors, users and society at large are being lured to conduct activities that are not always compatible or in sync with ethics, morals and laws. There seems to be a lack of thorough and comprehensive evaluation of the magnitude and extent of risks associated with unethical behaviors and damaging impacts. This is especially relevant for many social media platforms and internet search engines that are already occupying most of our time. Started in the nineties, the internet communication platforms totally revolutionized the way we communicate and do business. But the platform is also being abused, both by the owners and the users (in mass media such as TV and radio networks, only the owners or broadcasters can abuse the system). Many of the ethics and regulation questions discussed earlier also apply to the internet IP addresses. With this, let us browse through some known definitions of ethics, morals and laws.
. . . 3.1 Ethics and Morals
Ethics (antonyms: corruption, dishonesty, indecency) stand on the cultural and moral attitudes of a society – and unlike laws, many of them are universal, rooted deep in human social and cultural evolution. Unless encoded into laws, they do not carry the legal authoritative weight of the government. They are rather used to regulate the conduct of members affiliated with professional societies. These societies enforce ethics through licensure agreements and by-laws. Ethics and morals (antonyms: wrongdoing, unfairness, impropriety) are generally reconcilable, but they are not the same. While the former refers to the decent code of behavior of a group or society – the latter refers to an individual’s righteous code of conduct (such as those encoded in religious scriptures; e.g. Revisiting Jataka Morals - 1; Revisiting Jataka Morals - 2).
All businesses and organizations have some form of transparent ethics code in their policy documents – most of it is tuned to regulating customer and staff behaviors – but they are very scanty, if not silent, on the behaviors of executives and on business dealings. In the Duality and Multiplicity in a Society piece I have written on good and evil: . . . minimizing the duality gap on the ethics ground means that the evil at the trough needs to move up in an attempt to reach the level of good at the crest. Ethics on Social Media and Internet Search Engines. We have discussed them earlier in the context of asking questions. People hear outcries now and then (e.g. the portrayals in the 2020 Netflix documentary The Social Dilemma) about different social media platforms and search engines. Not only do these platforms harvest user data, but they also configure and influence/control consumer behavior. Apparently such practices are not preventable by existing privacy laws (which prohibit divulging information – but not using it), or by the general framework of ethics codes that most organizations and businesses have in their policy documents. At least three internet use/abuse issues catch people’s attention: (1) unethical and illegal – breaking into someone’s email account; (2) unethical and illegal – targeting a computer to control its functions by surreptitiously installing malware; and (3) unethical but legal – surveillance or monitoring of web-traffic by employers and governments. These three common practices, as prevalent as they are, should not define a society (see Social Order). One knows too well how true these malicious practices are – not only what is highlighted, but also the fact that they and different mass media outlets provide platforms for activities that propagate and nurture fake and unwholesome materials. They are constantly luring consumers to see things through their lenses. And many people are consuming them – some consciously, others not aware that they are being influenced/manipulated. Such activities perhaps have roots in the tabloid newspapers – which have been in business for more than a century (first started in London in 1903). Carrying and publishing misinformation (misleading information) and disinformation (distorted, deceptive information), including other news forms such as rumours and propaganda - all termed loosely as the so-called fake news - has been in practice for a long time – in tabloids, and also to some extent in the main media (electronic and printed). Now, AI has powered all such media outlets – more so the social media – to deliberately skew information to their choice and liking. Tainting the news culture by promoting the proliferation of misinformation and disinformation – such deliberate practices have basically shelved whatever honesty was there in the past. The trouble with such dishonest practices is that it is becoming increasingly difficult to judge what is fake – and what is not. It is not entirely their fault, however; the societal business models allow them to conduct themselves as such, to attract marketing advertisement (even the malicious ones) dollar$ (thus the onus of responsibility for bad behavior falls partly on the shoulders of advertisers). These models take advantage of the common human tendency to remain busy – in socializing, entertainment or productive activities.
Such a trend of internet dependency and surfing - which is only going to proliferate exponentially - has given rise to the necessity of coining a new term to describe the users - the Netizens. Further, it is hard not to notice the prevalence of some alarming activities that are designed to target particular members of society – to humiliate, dehumanize and morally degrade their spirit. Often wrapped as advertisements (texts and digitally adulterated content) – the products find their way through different mass and social media platforms (including land phones that have been upgraded to digital communication). One such alarming and disturbing aspect is the growing practice of Personalized Advertising/Marketing. Based on tracking consumer/user behaviors, likes/dislikes, habits, and personal and social situations – by one of the methods branded as cookies - the practice is basically dictatorial – programming the targeted consumer to see things through the advertiser’s lens. A Note on Business Ethics. Societal business models seem to work on the paradigm that corporate industries and businesses (which are not accountable to the people, but to themselves and their shareholders) are angels – requiring no regulation. The governing system of such a society therefore shies away from the responsibility of setting limits to AI proliferation (even to prevent it from drifting in the wrong direction). Therefore, the populace has no option other than to depend on the goodwill and grace of AI programmers, developers, executives and their business motives. But isn’t it utter naiveté to expect such behaviors from AI personnel (primary motivators for bad behavior of these individuals: the lure of promotion to a higher hierarchy, and a pay-raise) and entities – whose decision-making process is overwhelmingly motivated or governed by what makes money (see Governance) rather than by what is important and good for society? Also, it is important to note that AIPPS depend on adapting to the existing models in science, technology and social interactions, and/or to the modeling of collected information. Therefore it is important for the AI system to shun biased and misleading sources – instead, AIPPS should be developed by selecting neutral and defensible science – to remain unquestionably fair to all.
. . . 3.2 Laws
Before anything else, one has to have a clear view of the two broad categories of laws. The first is the Universal or Natural Law or the Universal Truths that govern everything including humans and social interactions - like the interactive Systems of Fluid, Solid and Life. The Fundamental Laws of Nature are one such set of governing laws. These laws are not enforced by any regulation - but rather by individual and societal actions and reactions - by their choices or negligence. They are the illuminating torch in one's life; one becomes vulnerable to suffering when he or she becomes ignorant of them, and does not care about their importance in life and social living. The other laws, as commonly referred to in human activities, are the checklist laws, or RULES to be precise - made by a human society for enforcement in its respective jurisdiction. The former are closely related to morals and ethics - while the latter may or may not be. Let's talk more about these mundane checklist laws in this piece. Further, let me begin by quoting Immanuel Kant (1724 – 1804): In law a man is guilty when he violates the rights of others. In ethics he is guilty if he only thinks of doing so.
This saying not only lays out the fundamental difference between law and ethics – but also points to the fact that one should be mindful and heedful of one’s thought processes – because mind is the forerunner of everything one does (Gautama Buddha - The Tathagata; 624 – 544 BCE). Laws (antonyms: anarchy, disorganization, violation) are broadly drawn from societal ethics (but sometimes the two can be in conflict) to allow or prohibit certain behaviors (the laws of most countries trace their roots to the British Common Law, or to the Civil Laws of the Napoleonic Code and German/Roman dominance). But laws differ widely among different jurisdictions (the more the discrepancies, the more the proliferation of lawyers), and some are not even up to date. Contrary to the values laid down in societal ethics, some laws of the land have elements of judicial travesty or covert avenues to protect special interest groups at the cost of denying the same legal rights to others (there are many examples of such practices; two important ones are: the interests of privileged corporate entities, the wealthy and the elites enjoy promotion and protection in a capitalistic society; communism came with a slogan to protect the interests of low-income people. Victor Hugo {1802 – 1885} in his famous novel ‘Les Misérables’ depicted to what extent the 19th century judicial system in Europe was brutal to the poor – the novel portrays the inhuman case of five years of prison time for stealing a loaf of bread for seven starving children). Laws are laid down by, and carry the authority of, a governing body for compliance, enforcement and punitive sanctions – differentiated into Civil, Criminal, Private and Public Law categories. The inking of some laws often lacks thorough research and rigor – in particular with regard to their interpretation and enforcement. Ambiguous or loop-holed texts in laws result in interpretive fights – leading to vicious and unproductive cycles of law-suits and counter-suits. It is interesting to note how the question and relevance of morality, ethics and law differ among people depending on where they stand on the social stratum. For example, a lawyer, an official sitting on the justice bench or a member of law enforcement sees morality as irrelevant if it is not part of the enacted ethics, bylaws, regulations and laws. For truly religious people or those who emphasize social morality, the question of moral values appears very important. They view moral values as the guiding principle defining societal well-being – and as the foundation on which enacted laws must be based. In the 257th verse of the Dhammatthavaggo or the Just Chapter in DHAMMAPADA, the Buddha said: He (or She) who does not judge others arbitrarily, but passes judgement according to truth, that sagacious person is a guardian of law and is Just.
. . . 4. AI and Civil Engineering
AI routines and algorithms that can think almost like a civil engineering (CE) professional are in different stages of research and development. AIPPS in CE sectors have the potential to enhance the capabilities of analysis and computation, streamline performance, and help with examining and screening the solutions of a particular problem. A repeat of the cautions pointed out earlier is warranted here: AIPPS are as good (or as bad) as the resources they utilize; therefore there have to be some form of CE judgmental checks on AIPPS performance. We often hear about an acronym GIGO - Garbage In Garbage Out.
Let me attempt to briefly outline some AI applications in the broad arena of civil engineering – applications that are enhancing, and will potentially enhance, the capabilities of Turning the Wheel of Progress in the forward direction. Introduced in the 1980s, AutoCAD (CAD - Computer Aided Design) revolutionized the drawing practices of designing engineering elements. Its evolution walked through – from the platform of mainframe computers to the present-day platform of personal computers – with several variants interacting with other different but compatible SW. An AI app, cashing in on the wealth of existing knowledge and experience, can take CAD designs further by learning from the user’s information and intention – and proactively guiding him or her through to overcome difficult problems. Sound engineering depends on the quality and quantity of the data it uses. There have been enormous technological advances in measurements – in aspects of resolution, extent and duration. Some of these are already powered by AI, or can be enhanced further. Numerical water modeling (see Water Modeling) – catering to the hydraulic engineering tenet of CE – has come a long way from the dependence on mainframe computers to its present stage. Improvements have occurred in all aspects – from the processing power of computation to the integration of modules – through dynamic coupling. Pre- and post-processing, display, visualization and animation of inputs and outputs have reached a level unimaginable just a few years ago. Computational meshes, both in space and time, have seen flexibility – even adaptability in some cases (so model instability and crashes are things of the past). Cashing in on the library of information and input that is on the internet (and that can be subscribed to and purchased) – AI can power modeling efforts substantially – from adapting to the specified data – to visualization, quality checks and validation of model results. Model validation issues highlighted in Water Modeling and in an ASCE Discussion Post indicate the iterative loop framework of MRCAP – Model-Reality Conformity Assurance Processes. It is important that an AI-powered water modeling suite integrate such a framework – with the aim to alert and help modelers go through the processes – before the product is validated and certified (a simplified illustrative sketch of such an iterative conformity loop is given below). Perhaps Cloud Computing Services are an answer to many such issues – they facilitate the pooling of resources assembled and managed by the Provider for the convenience and efficient performance of the User. The services are provided in three primary categories – the Platform, the Infrastructure and the Software – for computing, data storage, networking and others. They are mostly controlled and managed by the provider, with some freedoms of management by the user. Privacy, ownership and security are some of the concerns – especially for Public Cloud Services. Apart from that - some cases in point, the examples of real-time simulations: the hydrodynamics (one example, the hydrodynamic model used to build the Denmark-Sweden causeway, commissioned in 2000); the Wave Watch models; atmospheric circulation; and tsunami (see Tsunami and Tsunami Forces) and storm surge (see Storm Surge) modeling to forecast hazards (see Nature's Action). These are some of the advances that are destined to benefit further from AI power and progress.
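As a rough illustration of what such an iterative conformity loop could look like inside an AI-powered modeling suite, here is a hedged Python sketch. MRCAP itself is the framework referred to above; the function names, the skill metrics (bias and RMSE), the tolerance and the toy "model" below are hypothetical assumptions made only to show the run-check-recalibrate cycle, not part of any actual water modeling software.

```python
# A hypothetical sketch of the generic idea behind an iterative
# model-reality conformity check: run the model, compare with observations,
# and keep recalibrating until an agreed tolerance is met. All names,
# metrics and thresholds here are illustrative assumptions.
import numpy as np

def conformity_metrics(simulated, observed):
    """Basic skill metrics comparing model output with measurements."""
    simulated, observed = np.asarray(simulated), np.asarray(observed)
    bias = float(np.mean(simulated - observed))
    rmse = float(np.sqrt(np.mean((simulated - observed) ** 2)))
    return {"bias": bias, "rmse": rmse}

def conformity_loop(run_model, observed, recalibrate, rmse_tol=0.10, max_iter=5):
    """Iterate run -> check -> recalibrate; accept only when conforming."""
    params = {}
    for iteration in range(1, max_iter + 1):
        simulated = run_model(params)
        metrics = conformity_metrics(simulated, observed)
        print(f"iteration {iteration}: {metrics}")
        if metrics["rmse"] <= rmse_tol:
            return params, metrics              # model accepted for this data set
        params = recalibrate(params, metrics)   # alert/assist the modeler
    raise RuntimeError("Model did not reach conformity; do not certify.")

# Toy usage: a crude stand-in 'model' with one scale parameter, tuned to
# match synthetic 'observed' water levels.
observed = np.sin(np.linspace(0, 2 * np.pi, 50))
run_model = lambda p: observed * p.get("scale", 0.7)
recalibrate = lambda p, m: {"scale": p.get("scale", 0.7) + 0.1}
conformity_loop(run_model, observed, recalibrate)
```

The design point is simply that the check is built into the loop: the suite cannot declare a model "validated" until the conformity test passes or it explicitly flags failure to the modeler.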
Generative Design System (GDS) is one such AIPPS in civil engineering applications. It is programmed as an efficient and fast-computing AI optimization algorithm – one that generates multiple iterative solutions based on specified inputs and constraints. The optimization process lets the solutions be streamlined – by getting rid of excess or unnecessary materials. The solutions to a given problem, or part of a problem, are filtered and ranked according to the given criteria, conditions, goals and objectives (a toy sketch of this generate-filter-rank idea is given at the end of this section). The constraints – in terms of such items as design and computational tools, scope, costs, time, choices, and effects – are specified by the user/engineer. GDS belongs to the same genre as GPT – the Generative Pre-trained Transformer. The Transformer is an AI learning system based on a library of information as well as on conversational dialogue or Chat. GDS is gaining immense popularity in every branch of optimization modeling, engineering, management and decision-making work – one particular reason being the benefit of cost and time savings in generating multiple scenario-based solutions. GDS, with its 3D visualization outputs and 3D printing, has immense potential – provided the system is scientifically and ethically sound, heedful, diligent, balanced and harmonious to the Life System. The same premises can be reasoned for the structural and geotechnical engineering tenets of CE. Some existing analysis and design SWs in these two tenets have AI elements in them. They are only expected to advance further in time - Digital Twin Models - exact replicas of physical objects in the digital domain - are one such example. The model not only enhances planning and design process capabilities, earmarking bottlenecks and problem areas in real-time - but also prompts and sets directions for efficient and timely management of project components. A NAP 26894 publication has highlights on various aspects of this method. Engineers are dependent on Standards, Codes and Manuals for planning, designing and implementing a project. The guidelines they offer in overall or jurisdictional contexts can become part of AI resources for it to learn from. Once an AIPPS offers such a facility, it will greatly enhance an engineer's work. In the meantime, on 15 February 2024, I served as a co-panelist in the virtual round table ASCE conference on The Future of AI in Civil Engineering. The American Society of Civil Engineers can be contacted to know more about it.
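Here is the toy generate-filter-rank sketch referred to above, assuming a deliberately simplified problem: choose a rectangular section that meets an assumed capacity demand and budget with the least material. The capacity formula, limits and numbers are hypothetical placeholders; a real generative design system would couple such a loop to proper structural analysis, applicable codes and engineering judgment.

```python
# A hedged, illustrative sketch of the generate-filter-rank idea behind a
# generative design workflow. The candidate geometry, the simplified
# capacity formula and all limits below are hypothetical assumptions made
# for illustration only.
import random

random.seed(1)

REQUIRED_CAPACITY = 250.0   # assumed demand (arbitrary units)
MAX_COST = 900.0            # assumed budget constraint (arbitrary units)
UNIT_COST = 12.0            # assumed cost per unit of material volume

def generate_candidates(n=2000):
    """Generate many candidate rectangular sections (width b, depth d)."""
    return [(random.uniform(0.2, 0.6), random.uniform(0.3, 1.2)) for _ in range(n)]

def capacity(b, d):
    """Crude stand-in for a section-capacity check (proportional to b*d^2)."""
    return 600.0 * b * d ** 2

def volume(b, d, span=10.0):
    return b * d * span

def generative_design():
    candidates = generate_candidates()
    # Filter: keep only candidates satisfying the user-specified constraints.
    feasible = [(b, d) for b, d in candidates
                if capacity(b, d) >= REQUIRED_CAPACITY
                and UNIT_COST * volume(b, d) <= MAX_COST]
    # Rank: least material first (the stated goal/objective).
    ranked = sorted(feasible, key=lambda bd: volume(*bd))
    return ranked[:5]   # shortlist left to the engineer's judgment

for b, d in generative_design():
    print(f"b={b:.2f}, d={d:.2f}, volume={volume(b, d):.2f}")
```

The shortlist, not a single "answer", is the output: the engineer still screens, questions and certifies the result, which is the judgmental check argued for earlier in this piece.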
. . . 5. AI and the Future
Human entrepreneurial motivation comes from making money or profit. Without this, AI or any other human enterprise would not have seen the dawn of light (see more on Turning the Wheel of Progress). But then the virtual limitlessness of AI systems lets one ponder more questions. Two of them are: (1) why replicate or mimic humans to replace some or all of their activities? (2) is the human race in danger of losing its cognitive capabilities so much so that its functions are going to be incapacitated? The answer to the first question lies in the nature of human quests – which have made us different from, and in control of, other creatures – plants and animals. Human pursuits move on, because that is how we are (see more on Gift of Science & Technology; Natural Order; and Natural Equilibrium). But then, the ultimate aim of AI is to replace such human quests altogether – seeking something that can even explore us, and for us. Is that a smart thing to do? What are the long-term goals and consequences? The other part of the answer can simply be thought of as: humans have a natural affinity for automation (although AI is more than automation), thinking that some of our difficult (even undesirable) activities can be done by someone else. As deplorable as they were, during medieval and colonial times this gave birth to serfdom and slavery. During the beginning of industrialization, electrical-mechanical automated production lines and streamlined distribution/marketing systems appeared. Therefore, it is safe to say that we like comfort and like to enjoy the fruit of hard labor done by someone or something else – AI provides that opportunity. But AI also has an obligation not to do things at the cost of foregoing due diligence on consequences, and shunning responsibility. The answer to the second question is perhaps not yes. Well, not yet. Although the long-term consequences of dependence on AI may prompt one to say otherwise at some time in the future. In addition, people have questioned the possibility of some future grim scenarios – e.g. whether or not meaningful human engagement in jobs and other pursuits can be jeopardized. No one wants to think of such a scary long-term harmful consequence now – but it is something that may haunt humanity in the future.
. . . 6. Human - AI Teaming
Many of the discussed AI issues have found further support, clarity and explanation in the 2022 NAP Publication #26355 – Human-AI Teaming. This publication sheds light on some highly likely limitations and Human-AI teaming interactions – that an AI developer must address and pay attention to. Although the document was developed for defense establishments – the discussions and conclusions the authors came up with are insightful and applicable, by and large, to all different areas of AIPPS in different degrees. Limitations: (1) Brittleness: AI will only be capable of performing well in situations that are covered by its programming . . . ; (2) Perceptual limitations: Though improvements have been made, many AI algorithms continue to struggle with reliable and accurate object recognition in “noisy” environments, as well as with natural language processing . . . ; (3) Hidden biases: AI software may incorporate many hidden biases that can result from being created using a limited set of training data, or from biases within that data itself . . . ; (4) No model of causation: . . . Because AI cannot use reason to understand cause and effect, it cannot predict future events, simulate the effects of potential actions, reflect on past actions, or learn when to generalize to new situations. Causality has been highlighted as a major research challenge for AI systems . . . Human-AI Teaming Interaction: (1) Automation confusion: “Poor operator understanding of system functioning is a common problem with automation, leading to inaccurate expectations of system behavior and inappropriate interactions with the automation” . . .; (2) Irony of automation: When automation is working correctly, people can easily become bored or occupied with other tasks and fail to attend well to automation performance . . .; (3) Poor SA and out-of-the-loop performance degradation: People working with automation can become out-of-the-loop, meaning slower to identify a problem with system performance and slower to understand a detected problem . . .
SA refers to Situation Awareness; (4) Human decision biasing: Research has shown that when the recommendations of an automated decision-support system are correct, the automation can improve human performance; however, when an automated system’s recommendations are incorrect, people overseeing the system are more likely to make the same error . . .; (5) Degradation of manual skills: To effectively oversee automation, people need to remain highly skilled at performing tasks manually, including understanding the cues important for decision making. However, these skills can atrophy if they are not used when tasks become automated . . . Further, people who are new to tasks may be unable to form the necessary skill sets if they only oversee automation. This loss of skills will be particularly detrimental if computer systems are compromised by a cyber attack . . . , or if a rapidly changing adversarial situation is encountered for which the automation is not suited . . . More on the NAP document review is in an ASCE Discussion Post.
. . . 7. Some Concluding Remarks
Attempts to highlight and discuss various aspects of AI facts, concerns and recommendations have made this piece a long one. These aspects are worth paying attention to – especially on the paradigm of dissecting them into simple terms. And most of the discussed ethical concerns are similarly applicable to different seats of power – e.g. corporate empires and government entities (see Governance; Democracy and Larry the Cat). Perhaps some concluding remarks are helpful. AI has virtually no potential limits – either on its sprawling scientific capabilities or on navigating through the regulation-free wide arena of various human activities. The only limit appears to be replicating all or some aspects of the mind phenomena. It is because - as highlighted in the beginning - AI, by its very nature, has no freedom of simplicity - being constrained by the intricate loops of algorithms of machine learning and performance. AIPPS represent a natural progression of human quests to improve upon lives and livelihoods. Their huge potential to transform human civilization is beyond doubt. But such potential asks for the shouldering of responsibilities by all, including the systems of funding and advertisements. These cash-injecting powerful systems dictate the path for AI entities to follow. The performance of AIPPS is as good as the models they implement – therefore it is important to select neutral ones that are not tainted with inclinations of any kind. AI is a high-impact global phenomenon - like a Frontal Wave Force Field (more in Force Fields) - therefore, its issues and associated consequences must be addressed as such before it is too late. An alarming abuse and application of AIPPS allows users to create deepfake images or recordings (something altered and manipulated by AI applications to show that someone is doing or saying something that was not actually done or said) to tarnish the image and reputation of the victim. It is important for national and international authorities to collaborate to chart out thoughtful directions for AI entrepreneurships to follow - to guard against proliferating abuses. But directions should not come at the cost of choking entrepreneurial initiatives. Also, one has to realize the fact that a society cannot and should not rely on the goodwill of some programmers and their organizations for things to move in the right direction. Finally, a ray of hope.
Members of the European Parliament (MEP) adopted a comprehensive AI ACT on 13 March 2024. China began enforcing its AI Regulations in August 2023. Other countries are lagging behind. But in October 2023 POTUS issued an order requiring AI developers to share their data with the Government - one hopes more comprehensive measures will come. The 21 March 2024 UNGA resolution - adopted unanimously by all 193 member countries - asked all to safeguard human rights, protect personal data and monitor AI activities - to move in the direction of global governance of AI - rather than letting AI take the driving seat. Before releasing AIPPS into the marketplace, it is important that they are calibrated with societal moral and ethical values – to adapt AIPPS to them. Otherwise bad socioeconomic entropy (see Entropy and Everything Else) will creep in – to spiral down what common humanity has achieved. Both the user and the developer/programmer have the responsibility not to abuse the systems of AIPPS - during their development, application and communication to and from A to B. AI, with its huge potential, can drag humanity further in the direction of a Mechanical Civilization; therefore society should be very careful of the looming consequences. Perhaps it is important to slow down somewhat by pacing progress in phases – to have time to reflect and evaluate the impacts of AIPPS on the future of mankind. I would like to dedicate this piece to the worldwide victims whose trust has been broken by the entities responsible for upholding and honoring it.
. . . The Koan of this piece: I listen I read I see I learn I talk I write I do I teach I lead I want to learn more I investigate I imagine I create . . .
. . . - by Dr. Dilip K. Barua, 22 January 2021