Good news. You’ve decided to set up a monitoring project. You’ve asked the right questions before starting, and now you need to move on to the next step: defining the scope of the project. With the digitization of businesses, monitoring has become a must-have element of IT resource control. This is a project that is both critical and foundational, with the IT department and Business Management as stakeholders. How can you define the scope of your needs to make your project a success, both internally and externally? Here is a brief review of the essential steps in defining your needs.
Define the catalog of services to better identify critical (or not!) applications
The first, but by no means least, step is to analyze and sometimes rethink your catalog of services. What services does the IT department offer its users? This vast subject covers areas as diverse as the daily working environment (with Internet access, VPN connections, file sharing, collaborative messaging), the HRIS (with its intranet, its leave and payroll management) and all the applications that guarantee Business performance (CRM, sales management, production, accounts, etc.).
Making a list of the services is a good thing; analyzing it to identify the most critical and strategic applications is even better. This prompts reflection on the service levels expected and offered, a topic closely linked to ITIL methodologies.
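To make this concrete, here is a minimal sketch of what a service catalog entry with a criticality level might look like. The field names, criticality scale and SLA values are illustrative assumptions, not taken from any particular tool or methodology.

```python
from dataclasses import dataclass

# Hypothetical catalog entry: field names and levels are illustrative.
@dataclass
class Service:
    name: str
    owner: str         # team responsible for the service
    criticality: int   # 1 = business-critical ... 4 = best effort
    sla_target: float  # agreed availability target, e.g. 99.9 (%)

catalog = [
    Service("CRM", "Sales IT", 1, 99.9),
    Service("Payroll", "HR", 2, 99.5),
    Service("Intranet", "Comms", 3, 99.0),
]

# Business-critical services are the first candidates for fine-grained monitoring.
critical = [s.name for s in catalog if s.criticality == 1]
print(critical)  # → ['CRM']
```

Even a simple structure like this forces the IT department and the Business to agree, service by service, on what "critical" actually means.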
Identify the data to collect so that you don’t overlook the Holy Grail
Since we can now easily collect and store data, monitoring data has become the Holy Grail! You should, however, be careful to stay in context: with the multiplication of data sources and the growth in collected volumes, the question is no longer really what data to collect (to which I would answer: ALL OF IT), but at what level to collect it.
- Technical infrastructure data: this good old data we can’t do without remains relevant (it’s the very essence of monitoring), but it needs to be considered differently because of the profound changes that business infrastructure has undergone: cloud, IoT and mobility all make data collection more complex (and can even multiply data volumes tenfold, in the case of IoT for example).
- Application data linked to processes and services: since business and IT are now permanently linked, application monitoring has become essential to controlling the IS. It’s difficult to imagine a business that can’t bill because the accounting software is unavailable, or that can’t answer its customers because the CRM access is down.
- User experience data: beyond application performance, the user experience is also becoming fundamental. It reflects whether users can work with applications correctly and how well they have adopted them. This is an especially “touchy” subject for businesses with a web-dependent activity (e-commerce, contact platforms, downloads, etc.). By monitoring the customer experience, monitoring makes it possible to quickly identify the factors that could push Internet users to abandon their browsing or their purchase. It’s therefore essential to be able to collect user experience data in near real time.
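As a minimal illustration of that third level, the sketch below times a user-journey step and flags it when it is too slow. The `check` callable, the 2-second threshold and the statuses are assumptions for the example; in practice the step would be a real HTTP request or scripted browser scenario.

```python
import time

def measure(check, threshold_s=2.0):
    """Time a synthetic user-journey step and flag it if it is too slow.

    `check` is any callable simulating a user action (loading a page,
    logging in, searching...). The threshold is an illustrative value.
    """
    start = time.perf_counter()
    ok = check()
    elapsed = time.perf_counter() - start
    status = "OK" if ok and elapsed < threshold_s else "ALERT"
    return {"latency_s": round(elapsed, 3), "status": status}

# Stand-in for a real browsing step, e.g. loading a product page.
result = measure(lambda: True)
print(result["status"])  # → OK
```

Running such scenarios continuously, from where the users actually are, is what turns raw response times into user experience data.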
Know what to do with the collected data (to avoid the compulsive storage syndrome)
Once the data has been collected, you need to know how to use it. With the advent of Big Data, everyone’s in a hurry to collect (almost compulsively) before even asking: “What for?”. Yet this is an essential question in a monitoring project, because the collected data has to be re-processed and may be shared with users who don’t fluently speak “I&O (Infrastructure & Operations)”.
What will the collected data be used for? First of all, you’re going to need to present it on dashboards, graphs, maps and other workflows adapted to the different final user profiles (IT department, IT production managers and teams, Business, etc.).
Then, the data will need to be aggregated so that technical indicators can be presented from a business and application perspective.
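One common way to do this aggregation is a "worst status" rollup: the business sees a single status per service, derived from its underlying technical indicators. The severity ordering and the indicator names below are illustrative assumptions, not a reference to any specific product.

```python
# Severity ordering is an assumption for this sketch.
SEVERITY = {"OK": 0, "WARNING": 1, "CRITICAL": 2}

def rollup(indicator_statuses):
    """Return the worst status among a service's technical indicators."""
    return max(indicator_statuses, key=lambda s: SEVERITY[s])

# Hypothetical technical indicators behind one business service.
billing_indicators = {
    "db_latency": "OK",
    "app_server_cpu": "WARNING",
    "disk_space": "OK",
}
print(rollup(billing_indicators.values()))  # → WARNING
```

The Business doesn’t need to know which disk is filling up; it needs to know that "billing is degraded", which is exactly what this kind of rollup expresses.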
You also need to be capable of generating different forms of alarm (visual, displayed on screens, sent by email, etc.). This means anticipating how your data will feed your ITSM tool, so that tickets can be created automatically and on-call issues managed.
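A sketch of such an alarm dispatcher is shown below. The `send_email` and `create_itsm_ticket` functions are hypothetical stubs standing in for your mail gateway and your ITSM integration (typically a REST API call); the statuses and ticket fields are invented for the example.

```python
# Hypothetical stub: a real version would call your mail gateway.
def send_email(alarm):
    print(f"email: {alarm['service']} is {alarm['status']}")

# Hypothetical stub: a real version would POST to the ITSM API.
def create_itsm_ticket(alarm):
    return {"ticket_id": 1234, "summary": f"{alarm['service']} {alarm['status']}"}

def dispatch(alarm):
    """Route an alarm to the right channels based on its status."""
    actions = []
    if alarm["status"] in ("WARNING", "CRITICAL"):
        send_email(alarm)
        actions.append("email")
    if alarm["status"] == "CRITICAL":
        ticket = create_itsm_ticket(alarm)  # auto-ticket for on-call handling
        actions.append(f"ticket#{ticket['ticket_id']}")
    return actions

print(dispatch({"service": "CRM", "status": "CRITICAL"}))
```

Defining these routing rules up front, during scoping, avoids discovering at go-live that nobody agreed on when a ticket should be opened automatically.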
And, of course, you’ll need to be able to store your data so that it can be compared, assessed and tracked over time to produce diagnostics and reports on availability, capacity and consumption.
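Retained history is what makes a figure like "97% availability last month" possible. The sketch below computes that from stored check results; the data points are invented for illustration, and a real report would also account for maintenance windows and check frequency.

```python
# Invented history: e.g. one check result per measurement period.
history = ["OK"] * 97 + ["CRITICAL"] * 3

def availability(results):
    """Percentage of periods in which the service was up."""
    up = sum(1 for r in results if r == "OK")
    return 100.0 * up / len(results)

print(f"availability: {availability(history):.1f}%")  # compare to the SLA target
```

Comparing that number against the SLA target defined in your service catalog closes the loop between collection, storage and reporting.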
Identify who will use the data (and their profiles) to be able to get them involved
Your data will be used by a crowd of highly varied users who will have to be trained and made aware of the new monitoring practices. Identifying them quickly will allow you to get them involved from the very start of the project. The more active they are, the faster they will take ownership of the tool. This means AN-TI-CI-PA-TION: training should be planned and best practices included from the very start of the project. This is a very real subject to be considered, especially if you go for an open source solution. One of the very (too) common clichés is “if it’s open source, you don’t need training!”. And yet, almost 50% of the calls to Centreon support are due to a lack of knowledge of the product’s methodology and functions.
Choose a monitoring tool (but not at the last minute to avoid switching from Sesame Street to Matrix)
Which tool should you use to collect and analyze the data? In classic, so-called “V” project cycles, the software solution is often chosen from very precise specifications, at the end of the scope definition. This often creates a big gap between the dream tool (Sesame Street version) and the tool rolled out to production (Matrix version, where the tool forces us into submission). We therefore strongly recommend taking inspiration from agile methods and choosing your monitoring tool before you have finished defining the project scope. This will allow you to set up PoCs (Proofs of Concept), to use the tool while writing the specifications, and to include the specificities of the chosen tool (strengths and weaknesses) in the definition of needs and the specifications.
So don’t hesitate to choose a tool based on a preliminary definition of scope: this lets you refine the needs and also involve the stakeholders.
To achieve this, ask the right questions about:
- The software itself: is it interoperable with the existing IS? Is it modular, and can it be extended later with additional value-added modules? Is it easy to maintain? Is it reliable, robust and, above all, adaptable? How much does it cost and how is it sold? What is its economic model (free, paid, hybrid, purchase of licenses plus annual support, all-inclusive annual subscription)?
- The software publisher: do they offer quality technical support that is responsive and accessible? Is complete documentation available? Are the teams attentive to their clients? What is their reputation (flexibility, rigor, innovation, etc.)?
- Expertise: how expert am I in monitoring? Could I benefit from a skills transfer? Are profiles available on the market for this tool? How can the tool help me drive and manage change?
Define the sourcing policy to divide the roles between outsourcing and the internal team
It’s also important to know whether you need to call on specific expertise for your project. Whether it’s an external consultant to help you complete the project successfully, an expert contributing experience from similar projects, or technical resources from the publisher, you need to consider “who does what” so that you can correctly size your teams and resources.