IT Automation and AI: Finding a way In

Amartya Mandal
6 min read · May 1, 2023

So far we have been trying to solve today's problems with yesterday's technology, but that journey has now reversed.

Let's face it: over the next 3–5 years we will do everything we can to eliminate humans from the equation of software development, delivery, and maintenance. For the next couple of years an AI (or "copilot," a name not so intimidating) will shadow us, follow us, measure our inefficiencies, ask very pertinent questions, and build an ideal alternative that no one can ignore.

Every equation has two sides, an LHS and an RHS; we are aiming to remove humans from both sides, the problem as well as the solution.

We can hope that by the end of the next five years our economy will scale better, redefine the way an individual (say, a laid-off software or IT professional) takes part in the market economy, and supply sustainable alternatives.

There is a simple formula -

The success of full automation in the future is inversely proportional
to the inefficiency in your organization at present.

We accumulate inefficiency day after day, compounding it, until we bear the burden of a mountain.

Organizational efficiency will be a real focus in board meetings; stakeholders and investors will have specific questions about it. Now that you know the formula, failure to implement full automation cannot be justified by human inefficiency.

The recent success of an automation effort fades away in the face of human inefficiency: organizational structure, negligence, decisions without data, data without correlation, and the same old human reluctance to let go of old sentiments.

Let's try to find out how an organization can take advantage of AI in the coming months and years.
Too much information and too many options are both tempting and counterproductive. We can't stop the madness, but we can at least try to bring some order to the process.

Finding a way in

Organizations and companies can start harnessing the advantages of language models like GPT-4 by following a systematic approach that focuses on integrating the AI model with existing workflows, systems, and data.

Identify use cases and potential applications: The first step is to identify specific use cases where GPT-4-like models can supply value to the organization. These can include natural language processing tasks, automated content generation, code completion, documentation, customer support automation, and more.

Before adopting GPT-like models, organizations should assess their data privacy and security requirements. Each of the following concerns needs to be evaluated through a thorough process and with the utmost seriousness.

Data Privacy and Confidentiality — Ensure that the AI model does not keep, store, or expose sensitive data such as Personally Identifiable Information (PII), financial records, or trade secrets during the training process. Foundation-model providers and the companies adopting a model both need to work to standardize the onboarding process in a way that satisfies privacy and safety concerns.
One option would be enterprise-focused business models: companies like OpenAI can offer private AI model deployments, managed services, or dedicated instances of their AI models. This approach would allow enterprises to use AI capabilities while keeping strict control over data privacy and compliance.

By the time I started writing this article, OpenAI had announced that a business subscription is on the way; I am curious to read the fine print.

Large organizations can deploy AI models like GPT-4 privately, hosting them within their own infrastructure to maintain full control over data security and privacy.

“BloombergGPT, Bloomberg’s 50-billion parameter large language model, purpose-built from scratch for finance” https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance

For example, OpenAI currently offers the OpenAI Codex API, which powers GitHub Copilot and other AI applications, but it does not allow self-hosted deployments. That might be a complete "no" for a number of organizations, even if they want to provide a better developer experience.

However, one can explore other AI models and platforms that support on-premises or private-cloud deployment, or use fine-tuned models that restrict access to sensitive data. There will be a lot of activity in this space, and it's not going to slow down any time soon.
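To make the private-deployment idea concrete, here is a minimal sketch of calling a self-hosted model from inside the corporate network, assuming a hypothetical internal endpoint that speaks an OpenAI-compatible chat-completions API (the URL and model name are placeholders, not real services):

```python
import json
import urllib.request

# Hypothetical internal endpoint; many self-hosted inference servers
# expose an OpenAI-compatible chat-completions API at a URL like this.
PRIVATE_ENDPOINT = "http://llm.internal.example.com/v1/chat/completions"

def build_request(prompt: str, model: str = "internal-llm") -> dict:
    """Build a chat-completion payload for a privately hosted model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_private_model(prompt: str) -> str:
    """Send the prompt to the in-network endpoint; data never leaves
    the organization's infrastructure."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        PRIVATE_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The point of the sketch is the network boundary: because the endpoint lives inside your own infrastructure, prompts and responses never transit a third-party API.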

We are still adapting to the new possibilities, and there is no proven runbook so far, but the following three activities should be standard in any AI integration project:

  1. Data anonymization: Before sending data to the AI API, organizations can anonymize sensitive information by removing or replacing any identifiable information, such as names, emails, addresses, IP addresses, or proprietary terms. This way, the AI model can still supply valuable insights without directly accessing sensitive data.
  2. Data tokenization: Tokenization is another method to protect sensitive information. This technique replaces sensitive data elements with non-sensitive placeholders or tokens, allowing AI models to process the data without exposing the original sensitive information.
  3. Fine-grained access control: Set up strict access control policies for GPT-4 API usage. Limit access to specific users or groups within your organization, ensuring that only authorized personnel can interact with the AI model and the data it processes.
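The anonymization and tokenization steps above can be sketched in a few lines. This is illustrative only: the regex patterns are simplistic placeholders, and a real deployment would use a dedicated PII-detection library rather than hand-rolled expressions.

```python
import re

# Toy patterns for two kinds of sensitive data; real systems need far
# more robust detection (names, addresses, proprietary terms, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def anonymize(text: str) -> str:
    """Replace identifiable values with generic placeholders (irreversible)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

def tokenize(text: str) -> tuple:
    """Replace sensitive values with opaque tokens and keep a reverse map,
    so originals can be restored after the AI call (reversible)."""
    vault = {}
    def _swap(match, label):
        token = f"[{label}_{len(vault)}]"
        vault[token] = match.group(0)
        return token
    for label, pattern in PATTERNS.items():
        text = pattern.sub(lambda m, l=label: _swap(m, l), text)
    return text, vault
```

The difference between the two is reversibility: anonymization destroys the original value, while tokenization keeps a vault so the response can be re-hydrated before it reaches the user.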

Check out — ChatGPT privacy with homomorphic encryption:
“What if we could have encrypted conversations with LLMs in the same ways that we have encrypted conversations with our friends on messaging apps? Being able to use LLMs without revealing our personal data would unleash the true power of AI, while making both users and regulators happy. And as it turns out, this is now possible thanks to a powerful encryption technique called Fully Homomorphic Encryption (FHE).”

Once platform-level integration is done, operations need to keep providing the AI model with access to their information.
For example, suppose you want to automate a developer onboarding process so that a new hire does not need to worry about the right way to deploy a change without violating any organizational policies, or is supported the first time they are on on-call duty.

Usually this requires warm-up time: onboarding training and hours of meeting recordings and articles, and chances are those findings and solutions are mostly outdated without proper maintenance.
But what we really need is the right information at the moment we need it.

Wouldn't it be nice if an incident showed how many times it has occurred in the past? Which root cause analyses have been done so far? Which article has already provided an accepted solution? When was it last updated (and whether it is inconclusive), and who was the last author? Perhaps with a hint at the side that has already compared the last updated solution with the latest articles on the same topic from the internet, so that you can make a well-informed decision, or ping the last author of the Confluence article on Slack to find out more. Sounds futuristic! But it is possible today.
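As a toy sketch of that incident-history lookup, past incidents could be scored by keyword overlap with a new alert. The incident records below are invented for illustration; a production system would use embeddings and a vector store, but the flow is the same.

```python
# Invented sample incident history for illustration only.
PAST_INCIDENTS = [
    {"id": 101, "summary": "payment service timeout under load",
     "root_cause": "connection pool exhausted", "last_author": "alice"},
    {"id": 142, "summary": "login page 500 errors after deploy",
     "root_cause": "bad config rollout", "last_author": "bob"},
]

def similar_incidents(alert: str, incidents=PAST_INCIDENTS):
    """Rank past incidents by how many alert words appear in their summary."""
    words = set(alert.lower().split())
    scored = []
    for inc in incidents:
        overlap = len(words & set(inc["summary"].split()))
        if overlap:
            scored.append((overlap, inc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [inc for _, inc in scored]
```

Given a new alert like "payment timeout in checkout service", this surfaces incident 101 along with its recorded root cause and last author, which is exactly the "hint at the side" described above.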

You can effectively make use of the years of experience of all former and current developers, architects, and security experts, and of the thousands of incidents your on-call engineers have documented so far. Many future issues can be resolved by the collective intelligence you have already gathered!

For example, suppose you keep all of your articles, guidelines, and runbooks in a Confluence document library. Allow the AI to analyze and understand the content so that it can make accurate and relevant suggestions for every aspect and every process it must follow.
You can achieve this by:

  1. Manual input: You can manually provide the AI model with a list of relevant Confluence article URLs, compliance documents, and policy information. The AI model can then analyze and reference them as needed.
  2. API integration: Integrate the AI model with Confluence's API, allowing it to fetch and analyze relevant articles and documentation automatically. This method can be more efficient and saves time in the long run.
  3. Periodic updates: Ensure that the AI model is regularly updated with the latest Confluence articles, compliance documents, and policy information. This helps maintain the accuracy and relevance of the AI's suggestions.
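The API-integration route could look roughly like this: fetch a page through the Confluence Cloud REST API and split its text into chunks small enough for the model's context window. The base URL and auth token are placeholders for your own instance, and the chunking here is a deliberately naive character split.

```python
import json
import urllib.request

# Placeholder for your own Confluence Cloud instance.
BASE_URL = "https://your-company.atlassian.net/wiki/rest/api"

def fetch_page(page_id: str, token: str) -> dict:
    """Fetch one page, expanding its storage-format body so the AI
    model has the full article text to index."""
    url = f"{BASE_URL}/content/{page_id}?expand=body.storage,version"
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def chunk_text(text: str, max_chars: int = 1000) -> list:
    """Split page text into fixed-size chunks for the model's context window."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
```

Running `fetch_page` on a schedule is one simple way to implement the periodic-updates step, since the `version` field tells you whether a page has changed since it was last indexed.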

The following are a few more considerations before you adopt AI; each of these subjects demands its own article and has the potential to become a billion-dollar product.

Model security — Assess the risks of model inversion attacks, where an adversary may reconstruct training data from the AI model's parameters or outputs. Evaluate the risks of membership inference attacks, where an attacker could determine whether specific data was used in the training process.

Data bias and fairness, compliance with regulations, and auditing and monitoring.
And finally, explainability and transparency — Ensure that the AI model's decision-making process is transparent and understandable, allowing stakeholders to interpret and trust the model's outputs.

Next: IT Automation and AI: Project Management a clog in the system
