Introduction
We are a software company that not only develops products but also uses them, so we are well aware of how our work can affect the world around us. We also care about the mark we leave behind, not least because of the very nature of our journey: to make it easier for people to board the train of innovation.
We perceive artificial intelligence as a complement to the workforce, not its replacement. That is why we strive to create solutions that integrate seamlessly with companies and their operations. We see the value of AI in freeing people up to focus fully on their profession. For example, nurses in hospitals will no longer have to spend their time on the phone but will instead be able to attend to their patients.
In terms of the economy, we also see artificial intelligence taking the form of autonomous and semi-autonomous companies that complement existing ones. These companies will increase GDP and human wealth, but not at the cost of jobs. They will help many small entrepreneurs start businesses in areas where this would not be possible today due to strong competition and high entry costs.
At the same time, it should be noted that Xolution does not subscribe to the ideas and philosophy of NRx (neoreaction), transhumanism, or the Dark Enlightenment.
Trustworthy AI
At Xolution, we understand trustworthy AI as systems that do not disrupt the social order, comply with regulations (such as the GDPR and the AI Act), and support personal and business diversity.
An important element of our systems is explainability and the flexibility to make corrections. We realise that artificial intelligence systems are placed in environments where inputs and outputs are not precisely defined, and that they therefore work with a huge number of states that cannot be fine-tuned in advance. It is all the more important for us to know how to work with these systems, identify problems, and intervene with corrections. We want AI systems to remain under human control.
The purpose of this memorandum is to publicly declare these values, both to clients and to internal team members.
What are we doing to achieve this?
We perceive our impact primarily through the benefits that our customers’ clients get from our products. Once we realised that we ourselves would not want to communicate with a machine that did not understand us, it became our challenge to explore better ways for people and chatbots to understand each other. We do not pursue things that lack meaning just for profit. If a solution does not benefit society, it is not a path we want to take.
Our primary goal is to ensure that the results of our work are aligned with the expectations of our colleagues, our clients, and the overall nature of our company. To this end, we constantly discuss our goals and visions at regular company meetings, where, in a relaxed and open atmosphere, everyone in the company has the opportunity to express their opinions, thoughts, and ideas.
Our company is interested in building a good reputation so that we can all feel good about where we work, be satisfied with our jobs, and identify with what we do. It is important for us to feel a sense of belonging and to know that we all share the same, or at least similar, values.
Enforcement and control mechanisms
Throughout development, we constantly consider the implications of our products and services. The knowledge modeller creates a system that is fully controlled by humans and is responsible for what knowledge is entrusted to the system. In some cases, it is even better for the chatbot not to respond at all: some questions in the construction business, for example, require an expert opinion, so the chatbot must not take the initiative and answer them itself.
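To make this concrete, here is a minimal sketch of such topic gating, assuming a hypothetical list of expert-only topics and a simple gating function; the names are illustrative, not our production code:

```python
# Minimal sketch of topic gating: the chatbot declines to answer questions
# on topics that require an expert opinion. All names here are illustrative.
RESTRICTED_TOPICS = {"structural_load", "fire_safety", "building_permits"}

def answer_or_decline(topic: str, draft_answer: str) -> str:
    """Return the drafted answer only if the topic is safe to automate."""
    if topic in RESTRICTED_TOPICS:
        # The chatbot must not take the initiative here; defer to a human.
        return ("This question requires an expert opinion. "
                "I am forwarding it to a human specialist.")
    return draft_answer

print(answer_or_decline("opening_hours", "We are open from 8:00 to 16:00."))
print(answer_or_decline("fire_safety", "..."))  # declines and defers
```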
A chatbot should not aim to replace communication with humans, but only to replace humans in areas where it is beneficial, e.g. repetitive tasks that consume time, energy, and creativity.
The user should be made aware in advance that they are talking to a machine.
The way a customer wants to work with our product is subject to our approval. We see this as a form of protection, ensuring that our product is not used for the wrong purposes.
Our solution is also subject to the GDPR. Private data is protected in the cloud of one of the largest providers, Microsoft, and is not used for other purposes (such as advertising, sale to third parties, or training other models).
Users accessing our solutions can request a connection to a human at any time, regardless of the channel they are using.
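As an illustration of this principle (the trigger phrases and function names below are assumptions, not our actual implementation), a channel-independent escalation check might look like this:

```python
# Sketch: detect an explicit request for a human, regardless of channel.
# The keyword list is deliberately naive and purely illustrative.
HANDOVER_PHRASES = ("human", "agent", "operator", "real person")

def wants_human(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in HANDOVER_PHRASES)

def route(message: str, channel: str) -> str:
    # The same rule applies to web chat, voice transcripts, and messaging apps.
    if wants_human(message):
        return f"handover:{channel}"  # transfer the session to a live agent
    return "bot"

print(route("I want to talk to a real person", "webchat"))  # handover:webchat
```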
Our solutions are designed to be minimally invasive. By mimicking human behaviour, they step into a process that was previously managed by a human, and the system can therefore return control to a human at any point. The process is thoroughly documented: it is recorded graphically, and its behaviour can be reconstructed from log files. We know exactly what the system is doing and why.
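For illustration, a minimal sketch of such reconstructable logging, assuming one structured entry per dialogue step (the file name and field names are hypothetical):

```python
# Sketch: write one structured log entry per dialogue step, so that an
# entire conversation can be replayed from the log files alone.
import json
from datetime import datetime, timezone

def log_step(conversation_id: str, node: str,
             user_input: str, bot_output: str) -> None:
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "conversation": conversation_id,
        "node": node,  # position in the graphically modelled dialogue flow
        "input": user_input,
        "output": bot_output,
    }
    with open("dialogue.log", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

log_step("c-42", "greeting", "Hello", "Good morning, how can I help you?")
```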
The solution is monitored by other systems. These scan the log files overnight, looking for anomalies, and directly alert the development or customer success teams to potential problems or recommend changes. The system is also monitored functionally, and failures of individual parts are reported, so it is always clear which parts of the system are not working properly.
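A nightly anomaly scan of this kind could, as a rough sketch, look like the following; the log format, threshold, and alerting hook are assumptions, not our actual tooling:

```python
# Sketch: scan a day's structured log file and flag components whose
# error counts exceed a threshold, for review by the responsible teams.
import json
from collections import Counter

def scan_log(path: str, threshold: int = 10) -> list[str]:
    errors: Counter = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            if entry.get("level") == "ERROR":
                errors[entry.get("component", "unknown")] += 1
    # Components with suspiciously many errors are reported as anomalies.
    return [component for component, count in errors.items()
            if count >= threshold]

# Typically invoked overnight by a scheduled job, e.g.:
#   alerts = scan_log("/var/log/bot/2025-05-01.log")
#   notify_team(alerts)  # hypothetical alerting hook
```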
The system monitors human interaction. It records how many times a person has gone “all the way” and where people got stuck. These parts are subsequently reported for review by a human operator, who determines whether the chatbot still meets the criteria for interaction with humans or whether something has changed in human behaviour.
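As a sketch of such a completion metric (the node names and the “done” marker are invented for illustration):

```python
# Sketch: measure how often users complete a dialogue flow, and at which
# step the others abandoned it, so a human can review the problem spots.
from collections import Counter

def funnel_report(conversations: list[list[str]], final_node: str = "done"):
    completed = 0
    stuck_at: Counter = Counter()
    for steps in conversations:       # each conversation is a list of visited nodes
        if steps and steps[-1] == final_node:
            completed += 1
        elif steps:
            stuck_at[steps[-1]] += 1  # last node reached before abandoning
    return completed, stuck_at.most_common()

done, stuck = funnel_report([["start", "form", "done"], ["start", "form"]])
print(done, stuck)  # 1 completed; one user got stuck at "form"
```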
The system runs in the cloud, giving clients access to services such as auto-scaling, which helps absorb traffic spikes and thus mitigates the impact of DDoS attacks. The cloud also offers other infrastructure monitoring options.
The system is capable of detecting feedback, which is reported to the system administrator. It is also possible to monitor user satisfaction through ratings or NPS tools built directly into the solution.
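For instance, a Net Promoter Score over 0–10 in-chat ratings can be computed as the share of promoters (ratings 9–10) minus the share of detractors (ratings 0–6); a minimal sketch, with an invented sample of ratings:

```python
# Sketch: compute a Net Promoter Score from 0-10 ratings collected in-chat.
def nps(ratings: list[int]) -> float:
    if not ratings:
        return 0.0
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

print(round(nps([10, 9, 7, 3, 8, 10]), 1))  # 33.3: three promoters, one detractor
```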
The development team has received training in trustworthy AI led by external experts in AI ethics and regulation. Sustainability and social impact are therefore addressed as a matter of course when designing solutions. Any ambiguities or problems (including invalid client requests) are resolved by our team meeting with the client to discuss the issues and propose adjustments.
All solutions are tested by both humans and machines before being put into operation.
The company participates in AI research and publishes its results and procedures at conferences and in articles published on its website and LinkedIn profile.
At Xolution, we value our relationships with non-profit and research organisations. We are members of various institutions and are open to cooperation with the public and academic sectors, because we believe that democracy and freedom must be fostered within each of us and within our company, as well as in our interactions with other organisations and companies.
In our company, we stand behind the GDPR and AI Act standards. Despite their bureaucratic aspects, we recognise their practical benefits and their effort to prevent the destructive effects of certain features of digital systems. That is why our goal is to demonstrate in practice that behind every automation and AI there must be a person and their work... that digital systems must be created by people and for people.
Conclusion
Compliance with this memorandum is binding on the members of the company team and on the partners of all companies of the Xolution Group.
References
- AI Act – https://artificialintelligenceact.eu/
- ALTAI framework for trustworthy AI – https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment
- Transhumanism, Wikipedia – The Free Encyclopedia, version of 2023-10-11, Transhumanizmus – Wikipédia
- Dark Enlightenment, Wikipedia – The Free Encyclopedia, version of 2025-02-23 – https://en.wikipedia.org/wiki/Dark_Enlightenment
- KINIT – Kempelen Institute of Intelligent Technologies – https://kinit.sk/xolution-how-to-transform-business-responsibly
- L. Bešenyi, NRx a fašizmus (NRx and Fascism), 2024-04 – https://www.linkedin.com/pulse/techno-dystopia-iii-nrx-fa%C5%A1izmus-libor-be%C5%A1enyi-52sqf
- L. Bešenyi, Anti-demokracia a technológie (Anti-democracy and Technology), 2025-04 – https://www.linkedin.com/pulse/anti-demokraticia-technol%C3%B3gie-libor-be%C5%A1enyi-te2ve
- L. Bešenyi, State as a Service – Network state, 2025-04 – https://www.linkedin.com/pulse/state-service-network-libor-be%C5%A1enyi-eweqe