2.1 Service Administration
A documented definition of, and agreement on, required IT services and service levels must be established between IT management and the organization's customers. This process should include monitoring and timely reporting to stakeholders on service level accomplishments, which enables alignment between IT services and the related business requirements.
Portfolio management includes demand and resource allocation across all services, programs, and projects, including those that support the organization's own internal services and projects. Programs and projects exist either to create a new service; to expand, enhance, or improve an existing one (e.g., to reduce risk or cost per planning unit, or to add features); or to retire a service, a phase that is often forgotten.
Service levels must be periodically re-evaluated to ensure alignment of IT and business objectives. All service level management processes should be subject to continuous improvement. Customer satisfaction levels should be regularly monitored and managed. Expected service levels must reflect strategic goals of the organization and be evaluated against industry norms. IT management must have the resources and accountability afforded by the institution to meet service level targets, and senior management should monitor performance metrics as part of a continuous improvement process.
2.1.1 Service Level Management Framework
A framework that provides a formalized service level management process between customers and the service provider must be defined. This framework should maintain continuous alignment with business requirements and priorities, and facilitate common understanding between customers and service providers. The framework should also define the organizational structure for service level management, covering the roles, tasks, and responsibilities of internal and external service providers and customers.
The framework should include processes for creating service requirements, service definitions, and funding sources, as well as documentation such as Service Level Agreements (SLAs) and Operating Level Agreements (OLAs).
Specified service level performance criteria should be continuously monitored, and reports on the achievements of service levels should be provided in a format that is meaningful to stakeholders. The monitoring statistics should be analyzed and acted upon to identify positive and negative trends for individual and overall services provided.
SLAs and their associated contracts, if applicable, with internal and external service providers should be regularly reviewed to ensure that they are effective and up-to-date, and that changes in requirements have been taken into account.
2.1.2 Definition of IT Services
Definitions of IT services should be based on service characteristics and business requirements. These definitions should be organized and stored centrally.
2.1.3 Service Support
Service Support must focus on the IT end user, ensuring that they have access to the appropriate IT services to perform their business functions. Effective service support management requires the identification and classification, root cause analysis, and resolution of issues. This process also includes the formulation of recommendations for improvement, maintenance of issue records, and review of the status of corrective actions.
An effective service support management process maximizes system availability, improves service levels, reduces costs, and improves customer convenience and satisfaction.
This process should include setting up a service desk/service request function with registration, issue escalation, trend and root cause analysis, and resolution, which leads to increased productivity through quick resolution of user issues. In addition, root causes of issues, such as poor user training, can be identified through effective reporting and addressed.
2.1.3.1 Service Desk/Service Request Function
A service desk or service request function, which is the end user interface with IT, should be established to register, communicate, analyze, and route all reported issues, service requests, and information requests. It should be the single point-of-contact for all end user issues. Its first function should be to create a “ticket” in an issue tracking system that will allow logging and tracking of service support requests.
Issues must be classified according to type and business and service priority. There must be monitoring and escalation procedures based on agreed-upon service levels relative to the appropriate SLA that allow classification and prioritization of any service support requests as an incident, problem, service request, information request, etc.
Once an issue has been logged, an attempt should be made to solve the issue at this level. If the issue cannot be resolved at this level, then it should be passed to a second or third level within the issue tracking system and routed to the appropriate personnel for analysis and resolution where necessary. The service desk or service request function should work closely with related processes such as change management, release management, and configuration management.
Customers must be kept informed of the status of their requests. The function must also include a way to measure the end user’s satisfaction with the quality of the service support and IT services.
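The ticket lifecycle described above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the class name, field names, and support-level convention are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from itertools import count

_ticket_ids = count(1)  # simple sequential ticket numbering for the sketch

@dataclass
class Ticket:
    """Hypothetical service-desk ticket; fields are illustrative only."""
    summary: str
    requester: str
    category: str = "information request"
    status: str = "open"
    support_level: int = 1  # 1 = service desk, 2/3 = specialist support
    id: int = field(default_factory=lambda: next(_ticket_ids))
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    history: list = field(default_factory=list)

    def log(self, note: str) -> None:
        # Every action is recorded so the request can be tracked end to end
        # and the customer can be kept informed of its status.
        self.history.append((datetime.now(timezone.utc), note))

    def route(self, level: int) -> None:
        # Unresolved issues are passed to second- or third-level support,
        # but the ticket stays in the same tracking system throughout.
        self.support_level = level
        self.log(f"routed to level {level} support")

t = Ticket(summary="Cannot access shared drive", requester="jdoe")
t.log("registered at service desk")
t.route(2)
```

The key design point is that routing changes who works on the issue, not where it is tracked: ownership and the audit history remain in one place.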
As a goal, the service desk/service request function should be established and well organized, and take on a customer service orientation by being knowledgeable, customer-focused, and helpful. Advice should be consistent, and incidents should be resolved quickly within a structured escalation process. Extensive, comprehensive FAQs should be an integral part of the knowledge base, with tools in place to enable a user to self-diagnose and resolve issues.
Metrics must be systematically measured and reported. Management should use an integrated tool for performance statistics of the service desk/service request function. Processes should be refined to the level of industry best practices, based on analysis of performance indicators, continuous improvement, and benchmarking against other organizations.
2.1.3.2 Classification of Issues
Processes to classify issues that have been identified and reported by end users must be implemented in order to determine category, impact, urgency, and priority. Issues should be identified as incidents or problems, and be categorized into related groups, such as hardware, software, etc., as appropriate. These groups may match the organizational responsibilities of the end user/customer base, and should be the basis for allocating problems to the IT support staff.
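A common convention in service management practice is to derive priority from the combination of impact and urgency. The sketch below assumes three-level scales and particular priority labels; both are illustrative choices, not a fixed standard.

```python
# Illustrative impact/urgency-to-priority mapping. The three-level scales
# and the resulting labels are assumptions for this example; real SLAs
# define their own scales and cut-offs.
IMPACT = {"high": 1, "medium": 2, "low": 3}
URGENCY = {"high": 1, "medium": 2, "low": 3}

def priority(impact: str, urgency: str) -> str:
    """Derive a priority label from classified impact and urgency."""
    score = IMPACT[impact] + URGENCY[urgency]  # 2 (most severe) .. 6
    if score <= 2:
        return "critical"
    if score <= 3:
        return "high"
    if score <= 4:
        return "medium"
    return "low"
```

Priority then drives both the order in which support staff work issues and which SLA response targets apply.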
Note that incident management differs from problem management. The purpose of incident management is to return the service to normal level as soon as possible with smallest possible business impact, while the principal purpose of problem management is to find and resolve the root cause of a problem and thus prevent further incidents.
An incident is any event that is not part of the standard operation of the service and causes, or may cause, an interruption or a reduction of the quality of the service. Incident Management aims to restore normal service operation as quickly as possible and minimize the adverse effect on business operations, thus ensuring that the best possible levels of service, quality and availability are maintained. Normal service operation is defined here as service operation within SLA limits.
The objective of incident management is to restore normal operations as quickly as possible, with the least possible impact on the business and its users, in a cost-effective manner.
A problem is a condition often identified as a result of multiple incidents that exhibit common symptoms. Problems can also be identified from a single significant incident, indicative of a single error, for which the cause is unknown, but for which the impact is significant. Problem Management aims to resolve the root causes of incidents and thus to minimize the adverse impact of incidents and problems that are caused by errors within the IT infrastructure, and to prevent recurrence of incidents related to these errors.
The objective of problem management is to reduce the number and severity of incidents, and report findings in documentation that is available for the first-line and second-line of the service desk/service request function.
2.1.3.3 Tracking of Issues
The issue management process must provide for adequate audit trail capabilities that allow for tracking, analyzing, and determining the root cause of all reported issues considering:
- All outstanding issues;
- All associated configuration items;
- Known and suspected issues and errors; and,
- Tracking of issue trends.
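The audit trail requirement above can be illustrated with an append-only event log: events are only ever added, never edited, so the full history of each issue and its associated configuration items is preserved. The class and field names are hypothetical.

```python
from datetime import datetime, timezone

class AuditTrail:
    """Minimal append-only issue audit trail; names are illustrative."""

    def __init__(self):
        self._events = []

    def record(self, issue_id, event, config_item=None):
        # Append-only: existing entries are never modified or deleted,
        # which is what makes the trail usable for root cause analysis.
        self._events.append({
            "ts": datetime.now(timezone.utc),
            "issue": issue_id,
            "event": event,
            "ci": config_item,  # associated configuration item, if any
        })

    def for_issue(self, issue_id):
        """Full history of one issue, in the order events occurred."""
        return [e for e in self._events if e["issue"] == issue_id]

    def outstanding(self, closing_events=("closed",)):
        # An issue stays outstanding until a closing event is recorded.
        closed = {e["issue"] for e in self._events if e["event"] in closing_events}
        return {e["issue"] for e in self._events} - closed
```

Queries over the same log answer all four bullets above: outstanding issues, per-issue history with configuration items, and the raw material for trend analysis.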
The process should be able to identify and initiate sustainable solutions to reported issues that address the root cause, raising change requests via the established change management process where needed. Throughout the resolution process, regular reports should be received on the progress of resolving reported issues, and the continuing impact of reported issues on end user services and against established SLAs should be monitored.
In the event that this impact becomes severe or reaches established SLA thresholds, the issue management process must escalate the problem.
2.1.3.4 Escalation of Issues
Service desk/service request function procedures must be established so that issues that cannot be resolved immediately are appropriately escalated according to the guidelines established in the SLAs, and workarounds should be provided if appropriate. These procedures should ensure that issue ownership and life cycle monitoring remain with the service desk for all user issues, regardless of which IT group is working on the resolutions.
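An escalation trigger of this kind can be expressed as a simple check against the SLA response target. The targets and the 80% early-warning threshold below are hypothetical values chosen for illustration; real figures come from the agreed SLA.

```python
from datetime import timedelta

# Hypothetical SLA response targets per priority; actual values are
# defined in the SLA, not here.
SLA_TARGET = {
    "critical": timedelta(hours=1),
    "high": timedelta(hours=4),
    "medium": timedelta(hours=24),
    "low": timedelta(hours=72),
}

def should_escalate(priority: str, age: timedelta, threshold: float = 0.8) -> bool:
    """Escalate before the SLA target is breached: trigger once the issue's
    age reaches a configurable fraction of the agreed response time
    (threshold=0.8 means 80% of the target)."""
    return age >= SLA_TARGET[priority] * threshold
```

Triggering at a fraction of the target, rather than at the target itself, leaves the next support level time to act before the SLA is actually breached.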
2.1.3.5 Resolution and Closure of Issues
Procedures must be put in place to close issues either after confirmation of successful resolution of the issue, or after agreement on how to alternatively handle the issue. When an issue has been resolved, these procedures should ensure that the service desk records the resolution steps and confirms that the customer agrees with the action taken. Unresolved issues should be recorded and reported to provide information for the timely monitoring and clearance of such issues.
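The closure rule can be sketched as a small state transition: a ticket is closed only once resolution steps are recorded and the customer has confirmed; otherwise it stays visible as "resolved" pending confirmation. Field names and the two-step status convention are assumptions for the example.

```python
def close_issue(ticket: dict, resolution: str, customer_confirmed: bool) -> dict:
    """Illustrative closure procedure for a ticket represented as a dict."""
    if not resolution:
        # Closure requires the resolution steps to be recorded first.
        raise ValueError("resolution steps must be recorded before closure")
    ticket["resolution"] = resolution
    # Close only after the customer agrees with the action taken; until
    # then the issue remains "resolved" and stays visible in reporting.
    ticket["status"] = "closed" if customer_confirmed else "resolved"
    return ticket
```

Keeping unconfirmed issues in an intermediate state is what allows them to be "recorded and reported" for timely monitoring rather than silently disappearing.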
2.1.3.6 Reporting and Analysis
The issue management system must be able to produce reports of service desk activity so that management can measure service performance and service response times, as well as identify trends or recurring issues, so that service can be continually improved.
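Such a report boils down to aggregating closed tickets: average time to resolve for response-time measurement, and the most frequent categories for spotting recurring issues. The field names below are hypothetical; any issue tracking system would export equivalents.

```python
from collections import Counter
from statistics import mean

def service_desk_report(tickets):
    """Summarize service performance from closed tickets. Each ticket is a
    dict with illustrative 'category' and 'hours_to_resolve' fields."""
    closed = [t for t in tickets if t.get("hours_to_resolve") is not None]
    return {
        # Average response time: one input to SLA performance measurement.
        "mean_hours_to_resolve": round(mean(t["hours_to_resolve"] for t in closed), 1),
        # Top recurring categories: candidates for root cause analysis.
        "recurring": Counter(t["category"] for t in closed).most_common(3),
    }
```

Trends in these two figures over successive reporting periods are what turn the service desk log into input for continual service improvement.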
Establishing an effective service support process requires well-defined monitoring procedures, including self-assessments and third-party reviews. These procedures should allow continuous monitoring and benchmarking to improve the customer service environment and framework to meet organizational objectives. Remedial actions arising from these assessments and reviews should be identified, initiated, implemented, and tracked.
The need for metrics is driven by the desire to deliver and demonstrate high quality service. The type of metrics collected is driven by the business and IT requirements for service reporting and Key Performance Indicators (KPIs). Ultimately, metrics collection and aggregation provide input into key business decisions such as how to equitably allocate costs associated with IT.
Service metrics represent the KPIs of an IT service. They should be based on measurable attributes of the associated process, network, system, application, server, or storage components that support the service. For example, the availability of a service may be dependent on the combined availability of various underlying components as well as a minimum volume of transactions processed by an application. The basic requirement of any collected metric is that it be derived from performance and availability attributes of the specified target. Extended metrics will rely on more sophisticated attributes related to resource usage, transactions, and process efficiency. Still other metrics specify indicators that are more representative of business processes and operations.
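The dependence of service availability on underlying components can be made concrete with a worked example. When every component is required (a series dependency, with failures assumed independent), the combined availability is the product of the individual availabilities; this is a deliberate simplification, since redundant components raise the effective figure.

```python
def composite_availability(component_availabilities):
    """Availability of a service whose components are all required:
    the product of the individual availabilities (series dependency,
    independent failures assumed)."""
    result = 1.0
    for a in component_availabilities:
        result *= a
    return result

# Example: three components at 99.9%, 99.5%, and 99.9% availability
# combine to roughly 99.3% for the service as a whole.
service = composite_availability([0.999, 0.995, 0.999])
```

Note that the composite figure is always lower than the weakest component's, which is why a service-level target cannot simply be copied onto each underlying component.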
The technical infrastructure required to measure and collect metric data varies widely depending on the characteristics of the metrics and the availability of supporting data. There are dependencies on how the measured resource is instrumented and how the information can be collected. The complexity, effort, and “cost of collection” required to maintain such an infrastructure in a dynamic environment is another important element.
Use of standards, best practices, and effective integration are important considerations for successful and maintainable IT service metering. To reduce the overhead associated with common data collection implementations that use proprietary agents, IT service metrics should be based on agents with mechanisms supplied by applications and operating systems vendors, or with agents based on standards.
This non-proprietary approach helps minimize support overhead as well as speed deployment since it reduces much of the upfront architectural planning and detailed configuration efforts.
IT service benchmarking defines a strategic management method that compares the performance of one IT service provider with the IT services of other institutions or organizations. Performance means both efficiency and effectiveness criteria. The comparison can be carried out within one institution or organization, but also on a system-wide basis.
The objective of IT benchmarking is to identify optimization potential and derive recommendations on how performance could be improved. The benchmark is the so-called “best practice”: the institution whose processes for the IT service in question best meet the defined efficiency and effectiveness criteria.
A typical benchmarking procedure may include, but is not limited to:
- Identifying efficiency and effectiveness criteria that serve as comparative factors, and asking how IT services within an operative process should change;
- Finding (internal) benchmarking partners and (external) partners/donors, in order to set up a comparative platform, with each partner being prepared to share the necessary information;
- Setting up a key-figure system that takes comparability into account, with clearly defined boundaries, in order to ensure a fair comparative platform;
- Analyzing the database and identifying the best-practice participants and defining the target benchmark;
- Identifying optimization potentials and guidelines by comparison with the best practice;
- Calculating theoretical savings potentials (gap to benchmark);
- Extrapolating objectives in order to close the gap to best practice;
- Setting up an implementation plan; and,
- Controlling results and improvements.