Artificial intelligence has made remarkable inroads into the professional world over the past five years. Most large corporations now make extensive use of this technology, often described as ‘revolutionary’ – either through ready-to-use applications or by developing their own internal AI to meet specific needs while protecting their strategic data.
Towards an inflation of rules and dogmas
In a hyperconnected world, norms and standards have become essential to ensure consistency: the same causes should produce the same effects. This applies all the more to large international companies, particularly those with direct or indirect links to Anglo-Saxon countries, which are often forced to adopt the laws, directives and standards those countries enact. Failure to do so exposes them to costly and complex legal consequences. Some organisations therefore prefer to limit their activities to the local or regional level to avoid these extraterritorial constraints.
AI and the rules
Looking at the AI tools currently available to the general public, the impression of a ‘jungle’ prevails: sources are rarely cited, personal data can be used for training unless users explicitly opt out (a process that is often unclear), and some responses are partially censored or formatted according to the rules of the tool’s country of origin (United States, China, etc.). This situation reveals a considerable legal vacuum, born of the rapid pace of technological progress. Legislation is struggling to keep up, and only strong action by states or regional blocs (such as the EU) could bridge this gap. But will they have the power to influence such powerful economic interests? Only time will tell.
AI and pragmatism: can they coexist?
Pragmatism is based on experience and the ability to move beyond ready-made answers to explore solutions that are tailored to reality. Given its algorithmic design, is AI capable of offering anything other than standardised answers based on averages and past data?
A revealing test
The following question was put to a well-known AI: What is the optimal duration of an external facility management contract? Here is part of the answer provided: ‘The optimal duration of a Facility Management contract depends on several factors. In general, three years is a good compromise between stability, flexibility and regular performance evaluation. A five-year term may be appropriate if the scope is well defined and a relationship of trust exists with the service provider.’ The answer is well structured and well reasoned. On its own, however, it is not sufficient to make a decision: every contractual situation has its subtleties (clauses, service levels, regulatory environment, etc.) that nuanced human reasoning is better placed to grasp.
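For readers who want to reproduce this kind of test themselves, the following minimal sketch shows how the same question could be posed programmatically to a chat model. It is purely illustrative: the article does not say which tool was queried or how, and the OpenAI Python client and model name used here are assumptions, not the actual setup behind the quoted answer.

```python
# Illustrative sketch only: the article does not name the AI tool that was queried.
# Assumes the official OpenAI Python client (pip install openai) and an API key
# available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

question = "What is the optimal duration of an external facility management contract?"

# Send the question as a single user message to an assumed general-purpose chat model.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice, not the one used in the article
    messages=[{"role": "user", "content": question}],
)

# Print the model's answer so it can be compared with other sources.
print(response.choices[0].message.content)
```

Running such a script several times, or against different providers, is one simple way to cross-check the kind of standardised answer discussed above before relying on it.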
The real dilemma of the coming years
Professionals trained between the 1980s and the 2010s will tend to cross-reference data and compare AI responses with other sources to ensure the reliability of their decisions. Younger generations, on the other hand, are likely to favour a ‘copy-paste’ approach, which could lead, at best, to a decline in critical thinking and, at worst, to automated decisions that are ill-suited to the realities of the business.
Conclusion: a balance must be found
AI is already an integral part of our professional lives. To avoid abuse, the way we use – and feed – these tools must be subject to rigorous technical, legal and ethical oversight. Are we ready? Certainly not on our own. But with collective will, it is possible to strike a happy medium between rigour, pragmatism and innovation.
To test the capabilities of these tools, my original text was reworked by an AI to improve its substance…
Enjoy reading and see you soon.