05 March 2023

ISO 17025

 1.    Introduction

 1.1  History and status of ISO 17025:2005

Prior to the issuing of ISO 17025:1999 there was no internationally accepted standard for laboratory quality systems that could provide a globally accepted basis for accreditation. Accreditation was based on national standards. However, there was a considerable level of uniformity between the requirements expressed in these various standards due to the existence of ISO Guide 25, a document drawn up by the ISO Council Committee on Conformity Assessment (CASCO) in response to a request made at the International Laboratory Accreditation Cooperation (ILAC) meeting held in Auckland, New Zealand, in October 1998.

The declared purpose of ISO Guide 25, taken from its foreword, is to establish the principle that “third party certification systems (for laboratories) should, to the extent possible, be based on internationally agreed standards and procedures.” ISO guides are intended to be used by local standards institutions when preparing their own national standards. By this means, it is hoped to achieve a high degree of compatibility between standards prepared in different countries “so as to facilitate bilateral and multilateral agreements.” (Quotations from the foreword to ISO Guide 25, 3rd Edition).

The document now known as ISO 17025 began life as a revision of the third edition of ISO Guide 25, but during the revision process it was decided to convert the guide to a standard, so providing a truly global basis for accreditation. It was also decided to introduce as much compatibility as possible between ISO 17025 and the generic quality management system standard ISO 9001, which was also under revision at the same time. The objective appears to have been to create a logical connection between ISO 9001 and ISO 17025 such that the former would be seen as a master standard with ISO 17025 being a specific application of that standard to testing and calibration laboratories.

ISO 17025:1999 was accepted by ISO subscribing countries in late 1999 and came into effective use during the first quarter of 2000, after its adoption as a national standard by most countries around the world. The new version of ISO 9001, the 2000 edition, was accepted later.

The exercise intended to harmonize ISO 17025 and ISO 9001 was, in the event, regarded as imperfect, especially in that ISO 9001 placed great emphasis on continual improvement in the quality system. Although this was included in ISO 17025, its importance as a part of the standard was not strongly emphasized. Hence a revision of ISO 17025 was undertaken, and this led to ISO 17025:2005 which was adopted as an ISO standard in late May of 2005.

There are no fundamental differences between ISO 17025:1999 and ISO 17025:2005, and nothing which impinges essentially on the technical requirements. The main differences can be summed up as follows:

 ·       Insistence on a demonstrated commitment to continually improve the quality management system, together with identified mechanisms for achieving this.

·       Greater emphasis on the need to communicate with customers and, especially, to actively solicit feedback on service quality and ensure the resulting information is used as the basis of action to improve the management system.

The transitional period between ISO 17025:1999 and ISO 17025:2005 lasted two years, with the two standards running together. In May 2007 ISO 17025:1999 became defunct and existing laboratories which had not been assessed against the 2005 version ceased to be accredited.

1.2  International Recognition of Accreditation

Accreditation of laboratories is generally performed by national accreditation bodies. The primary function of such bodies is, of course, to provide assessment of laboratories in their respective countries against ISO 17025. However, they will also often respond to requests to carry out assessments in other countries, especially if the requesting laboratory is in a country without its own national accreditation body. Where there is a national accreditation body in the country and a laboratory seeks to use a body from another country, the incoming accreditation body will normally, as a matter of courtesy, seek approval from the resident body before operating in the country.

A laboratory may prefer to use an accreditation body other than its domestic organization when the latter either has no international recognition or lacks recognition in parts of the world relevant to the laboratory’s operations.

International recognition of accreditation awarded by national bodies is based on the conclusion of Mutual Recognition Agreements (MRAs) between national bodies. The mechanism is that the bodies seeking to agree to recognize each other’s accreditations will audit each other’s operations against ISO 17011: Conformity Assessment – General requirements for accreditation bodies accrediting conformity assessment bodies. This is the international standard to which assessment bodies are expected to adhere.

MRAs may be multilateral (i.e., involving more than two bodies) but even so they can be rather cumbersome, and it can take many years for a new national body to establish significant international recognition. However, this process is being rapidly streamlined by means of regional laboratory accreditation cooperations linked through ILAC (see section 1.1). The regional body applies rules for membership, including compliance with ISO 17011, and audits national bodies for compliance. Mutual recognition is then organized between the regional bodies, so simplifying the whole system and shortening the timescale.

Key regional groupings are the Asia Pacific Laboratory Accreditation Cooperation (APLAC), the European Cooperation for Accreditation of Laboratories (EAL) and the Southern African Development Community Cooperation in Accreditation (SADCA).

This regionalization of international recognition is developing rapidly but has not been fully established, and it is still possible to find accreditation bodies which, although members of the regional groups, prefer to pick and choose which of the other members they will recognize.

In this context of national accreditation bodies and mutual recognition, it must be noted that not all countries choose to establish their own domestic accreditation service. In the case of smaller countries with only a limited number of laboratories, this may make little economic sense. In such instances the country uses other national bodies, either based on an agreement with a particular body to provide the service to the country or on an ad hoc basis where each laboratory chooses its own accreditation service.

Currently, attempts are being made to develop arrangements where several countries club together to have a single regional accreditation body. It seems likely that initiatives of this type will bear fruit in Southern Africa.

The recent World Trade Organization initiatives to deal with technical barriers to trade sought to address the question of global acceptance of test data as part of quality issues in international trade. The agreement can be summarized as a recognition that, when deciding whether data from a particular laboratory is acceptable, a key criterion should be compliance of the laboratory with ISO 17025. This has been widely interpreted as meaning that countries all need to establish a national ISO 17025 accreditation body for laboratories. However, in this context, the following points need to be noted:

·       Accreditation must have credibility, which means, in practice, that the accreditation body must apply ISO 17025 rigorously and itself operate to ISO 17011. Simply having an accreditation body is not a solution.

·       The credibility of the accreditation body on an international scale requires MRAs, especially with countries which are recipients of traded goods needing testing support.

·       The credibility of individual laboratories can be established by their direct assessment by either customers or accreditation bodies from trading partners. A national accreditation body is not essential. 

1.3  Selection of a suitable accreditation body by laboratories

As should be clear from the foregoing discussion, the key issue in selecting an accreditation body is to ensure that it has recognition in the context in which the laboratory’s data need to be used. Where a laboratory operates purely in a domestic market and where the data is used only within the country, for example for local food safety or environmental protection, then a national accreditation body, even one with no international recognition, will normally be entirely suitable.

However, if the laboratory is servicing exporters who need to present its data internationally, it is critical that the accreditation body is recognized by importing countries. Hence the laboratory needs to establish the range of MRAs held by the accreditation body and, especially, which countries other than the home country of the accreditation body will recognize accreditation awarded by it.

1.4  Scope of Accreditation

Although ISO 17025 is written as though all the methods used by a laboratory are covered by the standard and hence included in any accreditation against the standard, this is rarely the case. In practice, any laboratory will have only some of its methods accredited, and perhaps not even the majority. Despite this, an accreditation body will often make a formal statement to the effect that it expects to see a comprehensively operating quality system. However, any assessment will focus on the scope of methods and on the equipment used to deliver them, and take little interest outside this scope. In this sense the term “accredited laboratory” is inaccurate. We should rather talk of a laboratory accredited for a specific list of methods.

The laboratory will need to select methods to offer as part of the scope when it makes its application for accreditation. The following criteria should be borne in mind:

·       Methods which are performed infrequently, for example fewer than 12 times per annum, are difficult to accredit since it is impossible to demonstrate a track record of performance. If such methods must be included in the accreditation, a high level of quality control will be required by the assessors.

·       Methods with little objective content are unlikely to be able to be accredited since consistency in application cannot be guaranteed. 

·       Commercial laboratories should select methods on a purely commercial basis. If there is no commercial advantage in accreditation of the method, then the cost and effort may not provide a return.

·       In many countries data generated for environmental, food safety or legal reasons must be covered by accreditation to be acceptable.

Overall, the laboratory should select a scope of methods which includes those it performs routinely and those where either commercial or legal issues make accreditation advantageous.

Accreditation bodies differ in precisely how they define scope, and some will allow a more generic definition in some areas of activity. In such instances they will assess the laboratory for a particular application of a method plus a procedure to be followed when extending the method to other areas. For example, a laboratory may have accreditation for trace metals in certain types of foods, with a procedure for how the recovery checks will be done and evaluated if the method is applied to a hitherto untried matrix.

1.5  Relationship between ISO 17025 and ISO 9001

As discussed above ISO 9001 is the general standard which specifies the requirements for a quality management system. Laboratories which meet the requirements of ISO 17025 also operate in accordance with the requirements of ISO 9001 that are relevant to calibration and testing activities.

What this means in practice is that an organization which holds ISO 9001 certification may use a laboratory accredited against ISO 17025 as a supplier of test data without the need to carry out its own audit of the laboratory’s quality system.

The question often arises of whether laboratories should be accredited/certified to ISO 9001 or to ISO 17025. In general, it is agreed that the appropriate accreditation for commercial testing and calibration laboratories is to ISO 17025. As a result of agreements with laboratory accreditation bodies, many ISO 9001 certification bodies will not allow their certification to be cited by commercial testing or calibration laboratories in support of their services.

What this means is that if you are an ISO 9001 certified organization with an in–house laboratory which forms part of your quality control system, the laboratory will be included in the ISO 9001 external audit. However, if you then want to sell the services of that laboratory to outsiders as a testing service, you cannot advertise it as an ISO 9001 accredited/certified laboratory. You would need to obtain accreditation to ISO 17025.

It is not uncommon, however, for organizations with laboratories used purely for internal quality control purposes to seek to accredit the laboratory to ISO 17025. This is generally done to enhance the credibility of the laboratory, and hence of the overall quality control system, or as part of the application of an ISO 9001–compliant system.

ISO 9001 external auditors will not usually do a detailed audit of such an internal laboratory if it holds a current ISO 17025 accreditation. The quality system in the laboratory is largely taken for granted for ISO 9001 purposes. Since laboratory accreditation procedures leading to ISO 17025 accreditation are explicitly designed for laboratories, they can be easier for the laboratory to interpret than the rather more diffuse requirements of ISO 9001, which are designed for a more general context. The other advantage of accrediting an internal quality control laboratory is that it will generally reduce the number of audits by customers, and this is often a key reason for seeking accreditation. Frequent audits by a range of customers can be disruptive to operations.

There are certainly a few significant omissions from ISO 9001 as compared to ISO 17025 although, as already discussed, there is a general ISO move to bring the standards closer together. The additional requirements in ISO 17025, as opposed to ISO 9001, include participation in Proficiency Testing, adherence to documented, validated methodology, and specification of technical competence, especially on the part of senior laboratory personnel. There is also a difference in the method of scrutiny of laboratories under ISO 9001 as compared to ISO 17025 assessment.

ISO 17025 assessment bodies will always use technical assessors who are specialists and who carry out a peer review of the methods being used by the laboratory and the way in which those methods are applied. An ISO 9001 external audit to determine suitability for certification does not include this peer review of technical aspects and the auditors are not required to be technical specialists. They confine their attention to the quality management system.

From the point of view of a laboratory’s clients, laboratories meeting the requirements of ISO 17025 fulfil all the relevant requirements of ISO 9001 when acting as subcontractors. The practical effect of this is that if an organization which is certified to ISO 9001 is using an ISO 17025 accredited laboratory as a subcontractor, it can treat it as an ISO 9001 certified subcontractor for any work within the laboratory’s scope of ISO 17025 accreditation. There will, for example, be no necessity to carry out quality audits of the subcontractors.

1.6  Summary

The general view to be taken of these various guides and standards is that ISO 9001 is the overall standard for quality management systems and ISO 17025 provides specific guidance on the application of the ISO 9001 principles to laboratories. This correspondence is becoming increasingly apparent with the development of both standards, especially as the language and terminology are converging.

 

When seeking to select or establish an accreditation body for laboratories, the key standard is ISO 17011, which is the basis on which international acceptance of an accreditation body, and hence of its client laboratories, is achieved.

 

2.    Organization and Management

2.1  General Points

It is extremely unlikely that any properly constituted laboratory will need to change its management structure in any fundamental way to comply with the requirements of ISO 17025. The management system must, however, be formally described in the quality manual and shown on an organization chart. For each level of management or individual post, there must be a job description describing the responsibilities to be discharged and the authority given, plus the supervisory responsibilities of each grade. The usual practice is to include key job descriptions in the quality manual, for example Laboratory Manager (or equivalent), Quality Manager or senior staff with specific responsibilities, but to retain other job descriptions in a separate file or in staff record files.

Be sure that responsibilities and authority match each other at all levels. There is no point in stating that someone has the responsibility for organizing interlaboratory calibrations, for example, if they do not have the authority to require staff to do the necessary work and to sanction appropriate expenditure.

The following sections give guidance on the important points to be considered when defining the management structure.

2.2  Legal identity of Laboratory

You will need to describe the precise status of the laboratory. Typical examples might be:

·       An independent commercial testing laboratory carrying out measurements for clients in return for a fee. 

·       A laboratory which serves a regulatory authority and provides data to that authority for enforcement purposes.

·       A laboratory which is part of a bigger organization, and which provides an internal service solely within an organization. 

The latter would include company quality assurance laboratories and laboratories providing in–house environmental control compliance monitoring.

You will have to prepare what you feel is a correct description of your laboratory. The description should also include a statement of the ownership of the laboratory and its relationship to any parent or subsidiary organizations. This relationship should be shown on a chart.

ISO 17025 includes an explicit requirement that, where the laboratory is part of an organization which performs activities other than testing and/or calibration, the responsibilities of all staff in the organization who have influence on the testing/calibration work are defined in order to identify potential conflicts of interest. This clause seeks to ensure that the organization thinks very clearly about any potential conflicts of interest and, presumably, seeks to minimise or eradicate them. A clear definition of authority and responsibilities in the quality manual can certainly contribute here. The sort of policy statement necessary is one which places a responsibility on the laboratory to generate and report data objectively. This would then be backed up by a statement that no member of staff has the authority to require any action to be taken which interferes with the laboratory in discharging this responsibility, irrespective of the normal line of management. Since the quality manual is endorsed at the highest level in the organization, laboratory staff may then rely upon this stated policy to protect them from any undue influences.

2.3  Body allocating resources.

In your description of the management structure, it is essential that you identify the body in the organization which makes decisions on policy and allocates resources. This will normally be the Board of Directors or an equivalent body with financial control.

This body must endorse the quality policy statement which appears in the quality manual and must express a commitment to the provision of resources to implement and maintain the quality system. The quality policy and the quality manual should normally be issued on the authority of the Chief Executive of the organization.

2.4  Technical Management

There is no requirement in ISO 17025 to identify a specific technical manager, but the technical management structure must be specified. This must make clear who is responsible for technical management and the scope of the responsibilities. For example, if different areas have different managers, this needs to be specified and their range of responsibility clearly defined. It is generally expected that in any specific laboratory there will be a distinct Laboratory Manager, but in larger organizations with several technically distinct laboratories there may be several Laboratory Managers with specific technical briefs and with no overall defined Technical Manager.

The role of Technical Manager must be clearly distinct from that of the Quality Manager and the quality management structure, but the technical management still has an obligation to ensure that technical activities are conducted in accordance with the requirements of the quality system. As discussed in Section 2.5, although the Quality Manager has overall responsibility for quality, this is discharged through monitoring and advising and not by managing quality control and assurance of technical activities directly.

The description of the technical management structure must also make clear how supervision arrangements work. A typical laboratory structure includes technical staff and professional staff. The general division of labour is that professional staff have responsibility for method selection, for development of new methods and for interpretation of data, whereas the actual bench work is done by technical staff, although professional staff may also be involved. Overall, the professional staff are responsible for the ultimate quality of the data, so the structure must show how they discharge this responsibility by supervision of the technical staff. This does not necessarily mean direct supervision but will typically involve explaining how instructions are passed down to the bench and how data is passed back and checked.

A key component of this responsibility is likely to come with the involvement of professional staff in interaction with customers of the laboratory through the process of review of requests, contracts, and tenders – see Section 7.3.

2.5  Quality Manager

This is the only post required to be defined by ISO 17025. You do not have to use the actual title, but you must identify the person responsible for the functions.

The Quality Manager is responsible for implementing and operating the quality system on a day–to–day basis. He or she is normally responsible for administering the controlled document system, for compiling the quality manual and for organizing the review and audit of the Quality system – see Section 4.

Much of the work required of the Quality Manager is administrative in nature but he or she is also, in the last analysis, responsible for the effective enforcement of the quality policy. He or she is also expected to advise management on quality issues by virtue of his or her close familiarity with the standard and the organization’s quality management system.

The Quality Manager must have direct access to the highest level of management in the organization, i.e., the body in Section 2.3 above, and to the Laboratory Technical management. An accreditation body will generally regard the Quality Manager as the person who provides day–to–day guardianship of the quality standard and so represents their interests within the organization.

The post of Quality Manager may be filled in several ways depending upon the organization.

2.5.1       Full time Quality Manager 

Few laboratories can support a full-time Quality Manager, and it is doubtful whether the post requires the full-time commitment of one person in laboratories with fewer than 100 staff.

In large organizations which have several quality systems running, for example ISO 17025 and ISO 9001, it may be appropriate to have an overall Quality Manager covering all systems, in which case a full-time Quality Manager may make sense. This can work well, but only if the Quality Manager has some laboratory background. Although, in theory, managing the quality system does not need technical expertise, in practice it is very difficult for a Quality Manager with no relevant technical insight to operate with laboratories.

One system which can be effective is to have an overall, possibly full-time, Quality Manager plus quality representatives at laboratory level. These can be individuals who are involved in laboratory management but who are given a specific brief to manage quality. They provide local quality expertise and administrative support to the quality system and assist the Quality Manager. Clearly, in this arrangement the quality representatives have dual reporting lines: the normal line to the Laboratory Manager on technical issues and a line to the Quality Manager on quality issues.

2.5.2       Using a Senior Scientist as Quality Manager

This type of structure would use one of the Senior Staff from the tier below that of Laboratory Manager to manage the quality system. This, of course, means that the Quality Manager occupies a line management position below that of Laboratory Manager. This should present no problems if it is stated in the documentation that, on matters of quality, the Quality Manager has direct access to the level of management described in Section 2.3 above. This then provides the Quality Manager with a line of action in the unlikely event that the Laboratory Manager is contravening the quality policy and attempts to subvert the Quality Manager by using line management authority.

This type of structure operates effectively in many organizations and is probably the commonest scenario found in medium-sized laboratories. There is often some initial discomfort with such an arrangement in highly hierarchical organizations, but it is usually found to be workable in practice.

2.5.3       Other possibilities for Quality Manager

Alternatives to the scenarios already discussed include the use of a staff member who is not involved in the laboratory work but who has the necessary technical background. Such a person is typically found amongst individuals who have been promoted into managerial posts from the laboratory.

The senior management tier described in Section 2.3 may contain such individuals. The use of such a person not only provides an independent form of quality management but also underlines the commitment of the highest level of management to quality.

In small organizations it may be difficult to separate the quality management and technical management functions completely, and some laboratory managers may function as their own Quality Managers. This is not disallowed by the standard so long as the responsibilities are clearly defined. Generally, assessors are understanding on this issue in small laboratories and will recognize genuine attempts to achieve the necessary separation within local limitations.

2.6  Deputies

There should be provision for deputies for all key posts, especially those of Quality Manager and Laboratory Manager, so that their functions can still be discharged in their absence.

A common scenario is for the Laboratory Manager and Quality Manager to deputize for each other where the Quality Manager has appropriate technical expertise, but you can make whatever arrangements are suitable for your situation. The point to concentrate on is that you must create a structure such that the laboratory is never going to be paralyzed because it is not clear who can give an authorization or perform a function in the absence of a principal.

ISO 17025 recognizes that in small organizations it may not be practicable to have designated deputies for all posts. This does not alter the fact, however, that if the laboratory does not have suitable arrangements for decision making in the absence of key staff, there is a danger of running into a situation where work may have to stop if a non–conformance with the quality system is to be avoided.

Look at your quality manual and analyze the responsibilities allocated, especially those concerned with authorizing activities. Consider the implications of the person to whom the responsibility is allocated being absent. Would this create a problem in operating? If the answer is yes, then cover the absence by showing where the responsibility is reallocated.

2.7  Other Posts

Apart from the requirements to define the Quality Manager and the technical management structure, ISO 17025 requires no other post to be defined. You should, however, look at your situation and describe how it is structured.

For each level you should have an appropriate name, for example, Chief Chemist, Senior Microbiologist, Materials Scientist, Technical Officer, Laboratory Assistant, or whatever titles your organization is familiar with.

You should define, for each level, the reporting structure both going upwards and downwards, for example each chemist reports to a Senior Chemist and Technical Officers report to a Materials Scientist.

2.8  Qualifications and Job Descriptions

For each of the levels which you have described under section 2.7, you should decide upon and record in the quality manual the qualifications and experience necessary to fill the post. There should be similar specific descriptions for the technical management posts and the post of Quality Manager.

From a quality point of view these descriptions represent a commitment to provide staff of defined capabilities at appropriate levels in the organization and to take on suitably qualified and experienced staff to replace those leaving. You should be careful not to make the criteria too restrictive, but do choose criteria which demonstrate a genuine commitment to have properly trained and qualified staff. Do not fall into the trap of listing the qualifications and experience of the present incumbents. The chances of your finding a perfect clone to replace them are remote. Think in terms of the minimum necessary qualifications and experience for the post and allow for using someone with appropriate experience and perhaps minimal qualifications.

The description of the post should include a brief job outline describing the duties to be carried out and the level of responsibility to be accepted, including responsibility for supervision.

The minimum content of the job descriptions should cover responsibilities for performing tests/calibrations; planning of tests/calibrations and evaluation of results; method development and modification, and validation of new methods; expertise and experience; qualifications and training programmes; and managerial duties.

The list below provides a useful method which effectively aligns staff, for technical operational purposes, into several levels. Any individual may, of course, operate at more than one level.

·       Those providing support and who never take responsibility for any data. 

·       Those who carry out routine work; such staff do not evaluate the data for release but will normally be expected to do any initial checks required by precisely defined quality control criteria.

·       Staff who exercise professional judgements and evaluate data – normally those who can take responsibility for the release of data.

·       Staff responsible for training and evaluating the expertise of trainees.

·       Staff responsible for selecting and validating methods.

2.9  Influences, Confidentiality, and Independence

The organization and management of the laboratory must be such that the scientific staff are completely free to exercise their professional scientific judgement. There must be no commercial or financial pressures which might influence the quality of the work. This does not mean that you cannot insist on a reasonable work rate from your staff, but it would not, for example, be acceptable to pay them based on the number of samples analyzed.

All staff must be instructed that they are required to keep confidential anything which they may learn in the course of their work and any information which they are given to help them to carry out their duties. The quality manual should contain a statement to this effect.

Some organizations require staff to sign a Confidentiality Agreement. This is not essential but is generally recommended.

There should be instructions to staff on the course of action to be taken if they believe that an attempt has been made to subvert their judgement. The general instruction should be that the staff member must inform the management as soon as possible and the management should arrange to have the work in question passed to another employee, wherever possible. It is emphasized that this is not done as a reflection on the integrity of the staff member concerned but is required in defense of the laboratory’s reputation and to remove any possibility of a suggestion that the approach might have been successful.


3.  The Quality System and its components

3.1  Key Questions

For any particular result from your laboratory, can you produce documentary evidence to demonstrate:

·       The work was done by a properly qualified person who had been trained in the relevant technical operations and had access to all information necessary to the proper execution of the work? 

·       The method used was technically sound and appropriate to the sample and to the requirements of the client?

·       The equipment used was properly maintained and calibrated at the time the data was generated?

·       All appropriate quality control checks were carried out and the results of such controls fell within the specifications?

With respect to your management and organization:

·       Are there clearly assigned responsibilities for management of quality? 

·       Are procedures in place specifically designed to ensure quality is maintained? 

·       Are these procedures documented?

·       Do you have a mechanism for approval of this documentation and for ensuring that all copies issued are kept up to date?

·       Do you monitor the arrangements for managing quality to ensure they are fully implemented and being followed by all?

·       Where quality problems occur do you have a mechanism for taking corrective action which seeks to develop the quality system so that a recurrence of the problem is unlikely?

·       Do you have a mechanism for identifying potential quality problems and taking action to prevent them from materializing? 

·       Do you have a mechanism for identifying opportunities to improve the effectiveness of your quality system? 

3.2  Purpose of a formal quality system

A professional will always take precautions to secure the integrity of the data generated by his or her laboratory. These will typically include calibration of instruments, checks on the calibration, testing of samples in duplicate and the testing of quality control (QC) samples for which the expected results are known.

Most of these operations come under the heading of quality control. That is, they are designed to check that nothing has gone wrong. A formal quality system will certainly include quality control procedures, but it is much more oriented towards quality assurance. By quality assurance we mean procedures and management methods which are designed to minimize the chances of anything going wrong in the first place. The emphasis is on error prevention rather than on error detection.

In the laboratory, the quality system must not only deliver quality data but must also provide for the maintenance of records which enable the quality of any result to be demonstrated historically. The laboratory which can produce such supporting evidence will, without doubt, have far more credibility than a competitor who cannot.

Having said all this, we must not lose sight of the fact that the purpose of the quality management system is to maintain and, where necessary, improve quality. Any records and documentation must help towards this end.

What we seek to do by formalizing the quality management system and its associated documentation and records is to ensure that quality management is applied comprehensively, appropriately, and consistently. We also seek to establish an audit trail such that if something should go wrong, we can trace the error and make modifications to the system that will reduce the likelihood of a recurrence, i.e., implement corrective action which addresses the root cause of the problem.

3.3  Elements of the Quality System

Management of quality should be simply one facet of each aspect of managing the laboratory. However, quality management is so vital that it is usual to identify a separate quality management structure. This will define clearly how responsibility and authority for dealing with problems of quality are allocated in the laboratory. This management structure is obviously the key element of the quality system. This is the mechanism for doing things.

3.3.1       Quality Documentation 

Documentation is important, but it is critical to realize that it is not the quality system. Because the major obstacle you see when you decide to adopt a formal quality system is the production of the documentation, it is easy to fall into this trap.

The documentation is simply one of the tools of the system and has two main roles:

a.     It is a mechanism for defining the quality system so that there is a consistent and secure basis for monitoring the system; in short, unless the system is clearly specified it is difficult to monitor whether it is being used.

b.     It is a means of communication within the laboratory so that all staff know their responsibilities and the procedures to be followed. 

The key piece of quality documentation is the quality manual. This is the document which describes in detail the policy on quality and the quality management structure and describes or refers to the procedures which constitute the working quality system. The quality manual is typically prepared and checked by laboratory management, usually under the overall coordination of the Quality Manager. It should, however, be formally authorized for issue from as high a point in the management hierarchy as possible; Chief Executive, Director General and Chairman are typical points. This ensures that the manual has the strongest authority and shows, to the accreditation body, a commitment on the part of the senior management to the quality system.

It is critical that the Quality Manual is seen to be a working document. It should be available to all staff and they must be instructed to read it and to use it to guide them in all aspects of their work. It will then be a vital force for the consistent and comprehensive operation of the quality system.

The Quality Manual may be completely comprehensive and self–contained, or it may refer to subsidiary documents which describe procedures. Whatever the case, the Quality Manual must be the primary source for all aspects of the operation of the quality system and must either contain all necessary information or must explain clearly where such information is to be found. Guidance on the preparation of the quality manual is given in Section 16.

Laboratories will normally require documentation of technical procedures in addition to the Quality Manual. The key part of the technical procedural documentation will be the documentation of the test or calibration methods themselves. The level of detail for these methods should be such as to enable a trained practitioner to carry out tests and calibrations in a proper and consistent fashion.

The technical procedural documentation should also contain, or refer to, operating details for all of the instrumentation used to carry out tests. These details can be provided either as part of the description of appropriate methods or as separate descriptions of operating procedures. Such descriptions may refer to instrument manufacturers’ manuals or other sources of information and, provided these sources are available, the information need not be repeated in documentation prepared in–house. It may, for example, be appropriate to have general operating instructions for a spectrophotometer, which are cross–referenced by methods which require measurements with the instrument.

It is not essential for the laboratory to write up all methods and operating procedures. Where standard methods are used, then, provided these are adequately documented to enable the method to be performed properly and consistently, the requirement for a method description can be met by making available to staff a copy of the standard specification, for example, a national or international standard. See section 7.7 for further discussion of methods documentation.

Similarly, equipment operating instructions may be made available entirely in the form of manufacturers’ manuals if these provide all of the information necessary.

A combination of the two approaches is often used, with in–house documentation being produced to refer to, amplify and clarify standard specifications and manufacturers’ manuals.

Whatever the approach, the available documentation must either contain all the necessary technical information for carrying out the laboratory’s scope of tests and calibrations or must make clear where the relevant information is to be found. The emphasis must be, as already stated, on ensuring that all staff have a source of reference to enable them to work properly and consistently.

Consistency is particularly important to the accreditation body since it is accrediting the laboratory and not the individual staff members. An important function of the quality management system is that it should ensure this consistency, and a key feature is to start with a clear written definition of what everybody should be doing as regards both quality management and technical procedures.

3.3.2       Quality System Records

The records kept should consist of:

a.     All original observations, raw data, calculations and derived data in the form of work sheets, notebooks, instrument output, etc.; these must be dated and should all be traceable to the person who made the observation or measurement and to the equipment used – see section 11. 

b.     Records of installation, maintenance, calibration, and checks carried out on instruments and other equipment; this should be in the form of an individual equipment log for each major item of equipment, or composite logs for smaller items, such as balances, thermometers, glassware. See section 8.

c.      Copies of all reports issued by the Laboratory. See Section 13.2

d.     Records of staff qualifications, training, and review of training in the form of a staff register. See section 5.

e.     Records of all audits and reviews of the quality system, including records of corrective and preventive action taken. See section 4

f.      Records of all customer complaints and responses to non–conforming work, and details of follow–up and any corrective action taken. See Section 4.10

g.     Records of suppliers and subcontractors. See sections 14 and 15.

The key objective in keeping records should be to ensure that the source of any error can be traced and that any test or calibration can be repeated in a manner as close to the original as possible.

Records must be kept in such a way that they can easily be retrieved if necessary and they must be secure, held in confidence and reasonably protected from destruction.

There should be a documented policy on the period of retention of records. ISO 17025 has no actual period specified but the laboratory must commit to a policy. Accreditation bodies usually have their own regulations, and these vary from body to body.

However, a typical requirement relates to the practice of most accreditation bodies of carrying out a full reassessment of a laboratory every four years. The normal requirement then becomes that all records for the past four–year period must be available. After the reassessment, most records for the four previous years can be disposed of. The only rider to this is that any records which are relevant to ongoing issues need to be kept for at least the duration of that issue.

What this means in practice is that records relating to individual items of equipment, for example, need to be kept for however long the equipment is in use plus whatever period is necessary to reach the next reassessment. Most laboratories retain records far longer than will be required by most accreditation bodies. 

3.4  Document Control

A key aspect of the operation of a quality system is that staff should be fully informed of their responsibilities and the way the various procedures, including the test and calibration procedures themselves, are to be operated. The main vehicle for passing on this information is the documentation described in Section 3.3.1 above. This documentation needs to be “controlled.”

A scenario all too often seen is that laboratories create an unnecessarily elaborate document control system which neither does the job required very well nor meets the laboratory’s needs. There are only three reasons why the document control system is needed. They are:

a.     To ensure that management is aware of, and has approved, all documents used by staff to guide them in their work.

b.     To ensure that all documents specifying procedures have been checked by someone with appropriate knowledge to ensure they are accurate, technically sound, and unambiguous.

c.      To ensure there is a record of all copies of each document issued, so that if documents need to be reviewed, withdrawn or amended all copies can be subjected to the same procedure.

The system established to achieve this needs to consider the following:

a.     The people reviewing documents before they are issued should be those with the relevant knowledge. This is not an issue of management hierarchy. For example, if the issue is whether a document is a correct and clear description of a bench procedure, the best person to review it might well be a technician who routinely does the work. 

b.     Documents may need to be issued and amended quickly, and this should be done by the most appropriately qualified person. For example, the issue or amendment of a method is simply a matter for the laboratory technical management, who should be free to act. Passing such issues through the sort of committee structure sometimes seen for operating document control achieves nothing except a slow response.

c.      If document issues and revisions do have cross–department implications and so need some discussion, the procedure for reaching agreement needs to be streamlined and made efficient rather than excessively bureaucratic.

Remember that the purpose of the document control system is to allow appropriate and accurate documents to be issued, amended, and withdrawn. Far too often, systems are encountered which obstruct the issuing of documents.

3.4.1       Scope of the document control system 

The document control system must be described in the Quality Manual. This system must cover all documents used in the operation of the quality system. These include obvious items, such as the quality manual, associated procedural documentation, technical methods documentation, and work instructions. Less obvious items which must be included in the document control system are masters of pro–formas used for record keeping, textbooks, posters, notices, calibration tables, memoranda, drawings, and plans. Indeed, any document which provides information or instructions for use in technical or management procedures must be controlled. This includes items posted on notice boards and memoranda, for example of instrument settings, which may be pinned to the laboratory wall.

There is no reason why you should not still have that list of certified values for your DTA – Differential Thermal Analysis standards on the wall, but it must now have a signature and date showing it is approved by management and has been checked as correct.

Controlled documents include published material and other externally produced information, such as instrument manuals. In–house–produced material is frequently issued to individuals, whereas other documents are typically made available by placing them in defined locations in the laboratory. There is, however, no reason why in–house material cannot also be made available through storage locations. The onus will be on the laboratory to show assessors that documentation is controlled and available and that staff know where to find it.

The term “document” should be interpreted with the broadest meaning as covering information in all forms, including computer files, software and other electronic or digital information. It is an increasingly common practice for laboratories to make their documentation available on a computer network. This can make control much easier and updating a very simple exercise. Obviously, such files must be read–only for users and only capable of being amended by authorized persons. They should also be prevented from being printed without authorization and recording, since this will generate unrecorded copies of the document, which will be missed by the updating process.

On a related point, the Quality Manual must include an instruction that controlled documents, other than masters of pro–formas (see Section 3.4.4), must not be photocopied by staff. The enforcement of this is essential to maintain the integrity of the system since the document control system must know of every copy of a document in circulation to ensure that all are reviewed and updated when necessary. Proliferation of unofficial photocopies will undermine the control system completely.

Some organizations seek to protect themselves from photocopying of controlled documents by using paper of a distinctive colour or by using paper with coloured logos or footer bands. A rule can then be made that any copies of controlled documents which lack the distinctive colour are not to be used by staff and must be destroyed if discovered. It is also possible to obtain paper which prevents photocopying, but this is expensive. These expedients do provide protection, but they are not without cost and should not be necessary in a well-disciplined and audited organization.

Allowance should be made in the quality manual for the issuing of uncontrolled copies of controlled documents but only outside the organization. This covers the need, which sometimes arises, to provide customers with copies of, for example, quality manuals. There is normally no necessity to provide regular updates to such issues and therefore no need to commit to this in the quality manual. Documents issued in this way should be clearly marked as uncontrolled, for example by a suitable stamp, and the quality manual should include instructions that such uncontrolled copies are not to be released within the laboratory and that, if they are encountered by staff, they are not to be used as work instructions.

3.4.2       Practicalities of controlled documents and their organization

The issuing and amendment of each controlled document must be assigned as a responsibility to an individual or specified group of individuals. No other person may make alterations to the document or authorize its issue. As already emphasized, the assigned individuals should be those with the relevant knowledge to evaluate the document, irrespective of line of management.

Each hard copy of a controlled document should carry a cover sheet which shows the following:

a.     A clear indication that the document is a controlled document. 

b.     A version number and/or the date of the current version so that the most recent version can be clearly distinguished. 

c.      An individual identifier of the copy of the document, such as a copy number, the date of issue of the copy and either the name of the person to whom the copy was issued or the storage location for the copy.

d.     The name, position, and signature of the person(s) on whose authority the document is issued.

e.     An expiry or review date for the document.

Any controlled document should also provide information which enables a user to check that it is complete. Either it should be paginated in the form “Page n of nn” or the number of pages should be shown on the cover sheet. It is a good idea, when paginating a large controlled document such as a Quality Manual, to paginate it in separate sections rather than continuously throughout. This avoids the necessity to replace the entire document if a page must be added. Under these circumstances, each page should show the section number, the page number and the total number of pages in the section, and there should be a contents page listing the sections.
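Where the register of controlled copies is kept electronically, the cover–sheet information above maps naturally onto a simple structured record. The sketch below (in Python) is illustrative only: the field names and the idea of an in–house register are assumptions made for the example, not requirements of ISO 17025.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ControlledCopy:
    """One numbered copy of a controlled document (illustrative sketch only)."""
    document_id: str                 # e.g. "QM" for the Quality Manual (hypothetical identifier)
    title: str
    version: str                     # version number and/or date of the current version
    copy_number: int                 # individual identifier of this copy
    issued_on: date
    authorised_by: str               # name/position of the person on whose authority it is issued
    review_due: date                 # expiry or review date
    holder: Optional[str] = None     # person to whom the copy was issued...
    location: Optional[str] = None   # ...or the storage location of the copy
    pages_per_section: dict = field(default_factory=dict)  # section -> page count ("Page n of nn")
    is_controlled: bool = True

    def overdue_for_review(self, today: date) -> bool:
        """True if the review date shown on the cover sheet has passed."""
        return today > self.review_due
```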

The operation of the controlled document system is normally one of the responsibilities of the Quality Manager, although the actual day–to–day administration of document control and record keeping may be delegated to administrative staff, for example in a library or document control centre.

In the case of each controlled document, there must be a record which shows either the name of the holder or the storage location of each numbered copy. When an amendment is authorized by the appropriate person, the Quality Manager can then ensure that every copy of the document is amended.

The simplest way to achieve this is to collect all the copies of the document and to amend and reissue them. In larger organizations this may not be practicable, and an alternative arrangement is to issue the amended pages with instructions for the action to be taken by the document holders. The pages should be accompanied by an amendment sheet which describes exactly which pages are to be discarded and which are to be inserted. This sheet should be placed in the copy of the document so that there is a full record of amendments.

ISO 17025 contains a specific requirement that there be a master list or equivalent procedures so that staff can check that they have the current version of a document. This provides an additional safeguard but, provided the document control system is working, there should be no obsolete versions around.

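In practice the master list need be nothing more than a table of document identifiers against the currently authorised versions. The sketch below is a minimal illustration (the identifiers and versions are invented for the example), showing how a member of staff, or an internal audit script, might confirm that the copy in hand is the current one.

```python
# Hypothetical master list: document identifier -> current authorised version.
# The entries are invented purely for illustration.
MASTER_LIST = {
    "QM": "Issue 4, 2023-01-16",        # Quality Manual
    "SOP-012": "Issue 2, 2022-11-02",   # a documented test method
}

def is_current(document_id: str, version_on_copy: str) -> bool:
    """Return True if the copy in hand carries the version recorded on the master list."""
    return MASTER_LIST.get(document_id) == version_on_copy

# Example: check a bench copy of SOP-012 before use.
print(is_current("SOP-012", "Issue 2, 2022-11-02"))   # True -> the copy is current
```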

ISO 17025 requires that, “where applicable”, new or altered text be identified in amended or revised documents or in attachments to the documents. The idea is that attention should be drawn to the specifics of any changes so that staff can identify the key points and determine easily whether changes in procedures are required or whether the changes are simply corrections of textual significance only, for example removal of typographical errors. In the case of in–house documents, this is easily achieved with modern word processing software, which usually permits the use of distinctive typefaces, and indeed colours, to highlight amendments. In the case of published material not generated in–house, it will normally be necessary to provide an attachment to the document when issuing it, pointing out the areas where changes have occurred. It is emphasized that the main point should be to make it clear to staff when the amendment will require them to change the way they carry out a procedure and when it is simply a textual change.

Amendment to controlled documents can be done by hand provided that the amendment is authorized by the person responsible for the document. Normally, this will require their initials and a date on the amendment. Hand amendment is only practicable in small organizations where only one or two copies must be amended. ISO 17025 contains a requirement that hand amended documents shall be formally reissued as soon as practicable. In practice, it is preferable to avoid hand amendment since it tends to lead to a sloppy approach to document control and almost always results in variations creeping in between different copies of a document.

The Quality Manager should hold copies of all versions of each controlled document so that, if necessary, the content at any point in its history can be determined. These copies can be in the form of computer files, provided they are properly protected by backing up.

3.4.3       Keeping documents up to date

There should be a procedure to ensure that controlled documents are reviewed from time to time. There is no set interval for this in ISO 17025, but an annual review is typical. All copies of a controlled document should show a review date so that users can immediately see whether the document is overdue for review. The review requirements are met if the person responsible for the document makes a record that the review has been done and either authorizes amendment to the document or records that the version as issued is confirmed as still relevant and requires no revision.

Some published documents, for example ISO or national standards describing technical methods, are subject to revision by the issuing body. The laboratory will need to assure the assessors that there is a mechanism for ensuring that such revisions are noted and that the laboratory’s copies of the documents have been replaced with the updated versions. The simplest way is to have a list of all the documents in this category, to check with a subscribing library or with the issuing body on a six-monthly basis and, of course, to record the checks.

This system for ensuring that documents are kept up to date should also encompass documents issued by the accreditation body.

It may sometimes be necessary to retain copies of older versions of documents, for example standards describing methods, where clients wish the previous version to be used. The requirement to update the documents does not preclude this, but care must be taken to ensure that the appropriate method is used for each client. Documents which are obsolete for general use, but which are retained for specific purposes, must be suitably marked. The marking should either specify the scope of use of the document or simply warn that it is not for general use and refer the reader to an authority, for example the Quality Manager, who can provide information on when it is to be used.

3.4.4       Some issues specific to pro–formas

For the first time in laboratory quality management standards, ISO 17025 introduced a requirement to control the pro-formas used for record keeping. To be strictly correct, what is being controlled is the format of the pro-forma.

The best way to meet this requirement is to have a master set of pro-formas and to insist that any copies made for use are always made from the masters and not from copies of the masters. The masters may be hard copies or, more usually these days, a set of computer files. There may be more than one set of masters but, as with all controlled documents, each set must be numbered and records kept of holders or locations.

This system then ensures that, if a pro-forma is amended, the amendment comes into use immediately. Staff must be weaned away from the common habit of keeping a stock of forms in a drawer or, worse, keeping a set of their own “masters” from which they make copies as needed.

The masters, and hence any copies of pro–formas, must show a version number/date of issue, the identity of the person authorizing the issue, and a date when review of the pro–forma is due.
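
A pro-forma master can be recorded in the same way as any other controlled document. The short sketch below is purely illustrative; the ProformaMaster record and the review_overdue check are names invented for this example, not terms used in ISO 17025.

    # Hypothetical record for a pro-forma master, carrying the details required above.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ProformaMaster:
        form_id: str          # e.g. "F-07 Sample receipt form"
        version: str
        issued: date
        authorized_by: str
        review_due: date      # date when review of the pro-forma is due

    def review_overdue(master: ProformaMaster, today: date) -> bool:
        """Flag masters whose scheduled review date has passed."""
        return today > master.review_due

    f07 = ProformaMaster("F-07", "3", date(2023, 1, 5), "Quality Manager", date(2024, 1, 5))
    print(review_overdue(f07, date(2023, 6, 1)))   # False - review not yet due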


4.  Monitoring and Maintenance of the Quality System

4.1  Key Questions

a.     Do you have a mechanism for checking that your arrangements for quality management are being operated on a day-to-day basis by all staff?

b.     Where this is found not to be the case, do you have a mechanism for taking corrective action to ensure that the situation is remedied and not likely to recur?

c.      Do you use the information from any quality problem to enable you to identify where the quality system can be improved, and do you act on this?

d.     Do you have mechanisms in place to scrutinize the quality system for areas where improvements might be possible?

e.     Do you review the performance of your quality system to determine whether it is delivering the objectives which you have identified for it?

4.2  Reasons for Monitoring the Quality System

A laboratory must take active steps to check that its quality system is being operated properly and that it is achieving the required standard of quality. Quality control provides some feedback on these issues, but this is not sufficient to meet the requirements of ISO 17025. The system must be proactive and provide assurance of quality. Moreover, the quality system itself must be under constant scrutiny.

ISO 17025 requires audit and review of the system on a planned and regular basis, plus ongoing monitoring to detect quality problems and even to anticipate and prevent problems. These are all strategies designed to detect actual or potential nonconformances with the quality management system before they impact on data quality. Where there are no problems, the activities provide a record that the quality management system has been scrutinized and found to be satisfactory.

4.3  Quality Audit and Review

The organization of audits and reviews is the responsibility of the Quality Manager, although he or she will normally involve other staff in actually carrying out audits. The Quality Manager is also responsible for checking that any corrective action agreed is adequate, is carried out and is effective.

The frequency of audit and review of the system is not mandated in ISO 17025, but a note in the standard indicates an expectation that each aspect of the system will be audited at least annually and that reviews will, likewise, be conducted at least annually.

4.4  Distinction between Audit and Management Review

An audit is concerned with checking that the quality system described in the quality manual and associated documentation has, in fact, been implemented and is being operated. To put it simply, the documentation describes what is supposed to be done while the audit checks that it is being done, and in the way specified.

Review is a management function where the key members of the laboratory management examine the performance of the quality system. The object is to decide whether the system is delivering what is required. The requirements will be, as a minimum, compliance with ISO 17025 but may also include any local policy requirements thought relevant by management.

The review and audit processes are distinct but interact in the sense that the review will consider, amongst other things, the audit reports. These will provide important information about where the quality system is weak and in need of revision. In this respect, review includes an element of preventive action – see Section 4.9.

4.5  Planning the Audit and Review Programme

This is the responsibility of the Quality Manager, but the plan should be considered and approved at the quality system review meeting. See section 4.7 below.

An annual plan should be prepared, and the proposed timing of audits should be marked on the plan and, when an audit is complete, the actual date and name of the auditor should be added.
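
By way of illustration, the sketch below shows how such an annual plan might be tracked so that overdue audits are flagged. The PlannedAudit record and the overdue check are assumptions invented for this example; any paper or electronic plan that records the proposed timing, the actual date and the auditor would meet the same need.

    # Illustrative annual audit plan; structure and names are assumptions, not ISO 17025 requirements.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class PlannedAudit:
        area: str                             # aspect of the quality system to be audited
        planned_month: int                    # 1-12, month in which the audit is scheduled
        actual_date: Optional[date] = None    # filled in when the audit is completed
        auditor: Optional[str] = None         # name of the auditor, added on completion

    def overdue(audit: PlannedAudit, today: date) -> bool:
        """Within the plan year, an audit is overdue if its planned month has passed and it has not been done."""
        return audit.actual_date is None and today.month > audit.planned_month

    plan = [
        PlannedAudit("Document control", 3, date(2023, 3, 14), "J. Smith"),
        PlannedAudit("Sample receipt (vertical audit)", 5),
    ]
    print([a.area for a in plan if overdue(a, date(2023, 7, 1))])   # ['Sample receipt (vertical audit)']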

The Quality Manager should be given complete authority to ensure that the plan is adhered to. The audits and reviews should be treated as an important issue and not put off for any reason whatsoever. Experience shows that, once a programme falls behind, it is difficult to catch up.

Audits are a crucial issue in an ISO 17025 quality system since the whole philosophy is that the laboratory designs and implements the quality management system and then carries out audits to ensure that it is working properly. In the absence of audits, problems will only be detected when they lead to quality failures. This is a quality control approach aimed at error detection; an ISO 17025 compliant quality management system is focused on error prevention through quality assurance, and it is only through the audit that the laboratory can be sure that the system is working.

Nothing will make assessors more uneasy than the impression that a laboratory does not take its audits seriously. A pattern of audits being carried out late relative to the audit plan will send the wrong message to assessors, and statements such as, “We had to put off October’s audits until November because we had so much work in” are likely to be interpreted by assessors as demonstrating a low-priority approach to the quality system.

4.5.1       Frequency of Audit and Review 

General practice is that an audit programme should be organized on a rolling basis such that, in any one year, each aspect of the quality system will be covered at least once. There should also be at least two vertical audits in any one year. In a vertical audit the auditor, rather than examining one aspect of the system, tracks a specific sample or samples through the laboratory, from receipt to reporting of results, and checks that the documented procedures have been followed and all records kept.

There is nothing in ISO 17025 which precludes an annual, one–off, audit of the whole system, but this is not regarded as good practice except perhaps in very small organizations where external auditors must be used. The rolling system described, where audit is seen as an ongoing part of management, is generally to be preferred.

Annual management reviews are generally enforced by accreditation bodies, and it is unlikely that this will change.

These timescales should be used on a routine basis, but it is strongly recommended that the frequency be doubled during the first year of operation of a new quality system, i.e., a six–month rolling audit programme supported by a six–monthly review.

The Audit Programme should be phased in during the implementation of the various elements of the quality system and not postponed until it is all in place. This provides a valuable check on the elements of the quality system as they are established. A settling in period of two to three months should be allowed for each part of the system and then that part audited.

An efficient and responsive audit system is a powerful tool in establishing the quality system. Inevitably, people will tend to forget to do things required by the quality system from time to time but, if they learn that any omissions will be picked up quickly and corrections asked for, they will rapidly acquire good habits and make fewer mistakes.

4.6  Quality Auditing

4.6.1       Choice of Auditors 

The responsibility for audits lies with the Quality Manager, who may carry out audits directly but will normally delegate at least some of them to suitable individuals.

Auditors should be familiar with the principles of auditing and must be independent in the sense that no one may audit an area of activity in which they are directly involved or for which they have immediate supervisory responsibility. ISO 10011, parts 1 to 3, provides useful guidance on auditors’ responsibilities and the organization of audit programmes.

ISO 17025 does imply that, when resources are not adequate, the independence requirement of auditors can be relaxed, but experience suggests that accreditation bodies will take some persuading on this issue.

In most laboratories it is usually possible to do all auditing with internal staff, but there is no reason why outside persons cannot be used. People from other parts of the organization to which the laboratory belongs are often suitable.

A typical situation in a small laboratory is that the Quality Manager carries out most audits, but some other senior staff member is brought in to audit those areas where the Quality Manager has personal involvement.

An auditor need not have a detailed knowledge of the technical aspects of the work of the laboratory, but some background is essential.

The Quality Manager is responsible for ensuring that auditors are properly trained and should maintain training records for them. These records can form part of the overall training records – see Section 5.3. Training can be given in-house, or outside agencies, such as consultants, can be used. A commonly applied strategy is for the Quality Manager to undergo formal training in auditing and then to train the internal auditors.

It is sometimes appropriate, especially with very small laboratories, to use external auditors to achieve independence. These will normally need to be approved by the accreditation body, but there is usually little problem with this since the accreditation body is happy to see steps towards effective and independent auditing.

4.6.2       Conducting an Audit 

It needs to be clearly understood that a quality audit is solely concerned with checking that what is actually happening matches the documentation. Before starting an audit, therefore, it is essential to agree on the documentation to be audited against. Normally, this will be a part of the quality manual with, perhaps, some associated documentation.

The Auditor should begin by looking at the documentation and should plan exactly what is going to be examined during the audit. The best approach is to prepare a checklist of questions to be answered, documents to be examined and inspections to be made.

The checklist should be used as the basis of the audit and should be covered comprehensively. Any questions which the audit raises, and which require further investigation, should be addressed only after the checklist has been covered. Too many diversions are likely to result in the audit not being completed or being very protracted.

The way an audit is approached is important. It should be seen by the laboratory as an opportunity to reveal any potential problems before they lead to a quality failure and not as an inquisition to be borne. The auditor should be seen as a valuable helper in ensuring that quality is maintained.

This atmosphere is best achieved if the auditor always remembers that his or her job is to audit the quality system and not the people. Avoid any actions or statements which might be taken to suggest that blame is being apportioned. If human error has occurred, the approach must be to seek to modify the quality system to make this less likely in the future.

The Audit should be a forward–looking process, oriented to ensuring that any problems do not recur. Past problems should only be analyzed for the information which they provide and not as part of a witch hunt to find a culprit.

Always remember that auditing is as much about confirming the satisfactory operation of the system as it is about finding problem areas. This means that Auditors must carefully record details of the systems, records and items examined, especially if the report finds no evidence of non–conformance.

4.6.3       Audit Reports

The auditor must make a written report on every audit. Other documents should be appended as necessary. It is customary and helpful for a copy of the auditor’s checklist and notes to be attached to the report.

Some audit report formats make provision for auditors to obtain a confirmatory signature from laboratory staff for their observations. The idea is to confirm factual observations so that these do not become a matter for later debate. The point needs to be made to the staff confirming the observations that they are only agreeing to the facts and not accepting that a non–conformance has occurred. It is only at the second stage of the process of audit that the auditor will decide whether an observation is to be reported as a non–conformance. The practice of having observations confirmed is not an essential component of auditing and whether it is adopted is a matter for the individual organization and will be very much determined by the prevailing culture. It does, however, reduce the possibility of contentious disputes since at least the facts of what was observed will not be at issue.

The audit report must detail exactly what was examined during the audit and the findings of each examination. Positive findings must be reported as well as negative since, to stress the point again, the audit is as much concerned with establishing that the system is working properly as it is with finding problems.

The auditor must be able to provide objective evidence for any conclusion reported. A general impression that all is not well in an area is not an adequate basis for a report. When auditing, always be objective and be able to prove your point without the need to debate the question.

Some systems incorporate the concept of grading non-conformances, for example as major, minor, etc. This is not recommended since every non-conformance requires addressing with corrective action. A debate on whether it is major or minor is not relevant and is merely time-consuming.

The format of an auditor’s report on a non–conformance must be objective and clear. In essence it should state “In Section 123 of your quality manual there is a requirement to operate in the following manner. Here is an example which I found where this is not being done.”

At the end of an audit, the auditor should hold an exit meeting with interested parties. This will normally be the laboratory management plus any other staff with supervisory responsibility in the area audited. The auditor should give a verbal report on the findings, both positive and negative.

At this meeting there should be agreement on any corrective action required and this should be recorded. The record should show what action was agreed, the person responsible for carrying it out and the timescale. This record can be part of the audit form, but many organizations prefer to have a separate corrective action record since this can serve more generally. A need for corrective action can arise from causes other than audit findings, for example client complaints or detection of non-conforming work.

In determining the corrective action, the objective should be to eliminate the root cause of the non–conformance and to reduce the likelihood of a recurrence. To emphasize this, it is recommended that audit report forms include a section where the root cause must be entered. This aspect is strongly emphasized in ISO 17025 but, in practice, assessors have always required such an approach. A quick fix for a problem with no attempt to eliminate the fundamental weakness in the system will not meet with approval.

The Quality Manager has overall responsibility for checking that corrective action is taken and for making a record of the completion. Follow–up arrangements should also be made to ensure that the action has been effective. These arrangements should also be made by the Quality Manager. In the case of many non–conformances, the follow up should include a partial re–audit, at least, to be scheduled after an appropriate lapse in time so that the working of the modified system can be tested. Typically, one to two months should be considered. The plans for follow up and the confirmation of its successful completion need to be recorded as part of the corrective action.

The completed audit reports should be filed by the Quality Manager and will need to be available to assessors. It should be appreciated that assessors’ prime interest in the audit reports will be to check that corrective action has been taken and followed through and not to use them as a means of detecting the laboratory’s weaknesses.

4.6.4       Audit Summaries

It is good practice for the Quality Manager to summarize the results of audits. The summary should collate the numbers and types of non-conformances, classified by the area of the quality system to which they pertain. The summary should be presented at the management review meeting.

The value of the summary is that it will highlight the areas of the quality system which fail most frequently and so help to focus the review. The summary can be particularly useful in multi–department laboratories where it brings all the audit findings together and can help to identify quality problems that are common to several departments. Such problems are often most effectively addressed at the higher management level rather than by individual departments.
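
A minimal sketch of such a collation is shown below, assuming the Quality Manager holds the findings as a simple list of (area, description) pairs; the data shown are invented and the structure is an assumption made for illustration.

    # Illustrative collation of audit non-conformances by area of the quality system.
    from collections import Counter

    # Hypothetical findings: (area of the quality system, short description).
    nonconformances = [
        ("Document control", "obsolete copy of a test method in use"),
        ("Records", "worksheet not signed by the analyst"),
        ("Document control", "uncontrolled photocopy of an SOP found"),
        ("Training", "authorization not countersigned by the employee"),
    ]

    summary = Counter(area for area, _ in nonconformances)
    for area, count in summary.most_common():
        print(f"{area}: {count}")
    # Document control: 2
    # Records: 1
    # Training: 1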

4.6.5       Additional audits

In addition to the planned audit programme there may be a need for audits in response to any occurrences which indicate non–conformances with the quality system or with ISO 17025. These would include internally reported quality problems, including detection of non–conforming work, client complaints and actual quality failures which come to light in any way whatsoever.

The audit should be organized to address all the relevant parts of the quality system. The follow up is the same as for planned audits, with appropriate corrective action being taken and its completion and effectiveness verified.

4.7  Quality System Management Review

4.7.1       Composition of the Review Committee 

This should consist of, as a minimum, the Quality Manager and the Laboratory Manager, with someone present who represents the most senior level of management of the laboratory where decisions on allocation of resources are made. This could be a representative from the board of the company or the equivalent policy-making body. The presence of such a representative is key since there may be resource implications in the findings of the committee. Any other persons who might contribute should be present. These would typically include senior professional staff and, perhaps, chief technicians. In a small laboratory with only two or three staff, it is usually most effective to involve everyone in the review.

4.7.2       Administration 

The Quality Manager is responsible for arranging the meeting and for compiling and distributing the agenda. He or she should also ensure that all relevant documents are available. These should include but not be limited to:

a.     All audit reports and summaries, including reports by external assessors and any audits carried out by customers.

b.     Feedback from customers.

c.      The proposed audit and review programme for the following year.

d.     Details of in-house quality control checks.

e.     Reports on quality failures and follow-up action.

f.      Reports on customer complaints and follow-up action.

g.     Preventive action records.

h.     Results from participation in Proficiency Tests and other interlaboratory trials.

The meeting must be minuted and a list of action points prepared. The Quality Manager is responsible for ensuring that all the actions agreed at the review meeting are carried out, but the meeting should agree on who will carry out each action and the timescale. This should include the updating of any documentation.

4.7.3       Meeting Agenda

The purpose of the review meeting is to look at the performance of the quality system over the past year and to decide on any modification needed to secure improvements.

The meeting should also consider any modification needed to meet any changes to the quality management standard to which the laboratory adheres and any changes needed to address new policy objectives set by the organization.

The meeting should cover, at least, the following topics:

a.     Quality matters arising from the last review meeting and a report from the manager confirming that all actions have been taken.

b.     Report on any surveillance or assessments by the accreditation body.

c.      Discussion of the results of all audits, both internal and external.

d.     Review of the quality manual and decisions on any changes required.

e.     Performance in Proficiency Tests and any similar interlaboratory exercises; plans for future participation in such exercises.

f.      In-house quality control checks.

g.     Complaints from customers and follow-up action.

h.     Results from customer surveys and plans for future surveys.

i.       Quality incidents and follow-up action.

j.       Review of staff training and plans for the following year.

k.     Adequacy of staff, equipment, and other resources to maintain quality.

l.       Plans for staffing, equipment, premises, etc.

m.   Agreement on action points and date of next meeting.

4.8  Corrective action

Corrective action will be required whenever a quality problem is identified. The audit is an obvious mechanism for determining whether corrective action is necessary, but there are other potential sources of information, and all of these should be used. Obvious sources are complaints from clients, information passed on from laboratory staff about quality problems, detection of non-conforming work and direct detection of quality failures through quality control monitoring. Interlaboratory Proficiency Testing and feedback from external assessors and auditors would also come into the category of useful sources from outside the organization.

4.8.1       Records 

Because of the variety of sources giving occasion for corrective action, it is useful to separate the record system for reporting quality problems from the system for planning and recording corrective action. This allows the same corrective action management and recording system to serve for all sources of information on quality problems. The audit, for example, is reported on a form dedicated to that purpose, and this is cross referenced to the corrective action requests.

The corrective action record system should provide for the recording of the reason for the action and for a detailed description of the proposed corrective action, with an explanation of how it addresses the root cause of the problem. The responsibility for the action should be assigned and a timescale agreed. There should be a record of the arrangements proposed to verify the effectiveness of the corrective action. This will normally mean some type of audit, possibly of restricted scope, covering only the immediate area of the quality system involved in the action.

The Quality Manager should be required to confirm when the action is complete and should also be responsible for ensuring that the verification of effectiveness is carried out and recorded. 
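
The sketch below illustrates one possible shape for such a corrective action record; the CorrectiveAction fields and the outstanding check are assumptions chosen to mirror the elements described above, not a format required by the standard.

    # Hypothetical corrective-action record covering the elements described above.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class CorrectiveAction:
        source: str                  # e.g. "Internal audit IA-2023-04" or "Client complaint C-12"
        root_cause: str              # analysis of why the non-conformance occurred
        proposed_action: str         # what will be changed to address the root cause
        responsible: str             # person assigned to carry out the action
        due: date                    # agreed timescale
        completed: Optional[date] = None
        effectiveness_check: Optional[str] = None   # e.g. reference to a follow-up audit

    def outstanding(actions, today):
        """Actions past their due date and not yet recorded as complete."""
        return [a for a in actions if a.completed is None and today > a.due]

    ca = CorrectiveAction("Internal audit IA-2023-04", "SOP template has no review-date field",
                          "Add a review-date field to the template and reissue affected SOPs",
                          "Quality Manager", date(2023, 9, 30))
    print(len(outstanding([ca], date(2023, 10, 15))))   # 1 - action overdue, chase the responsible person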

4.9  Preventive action and improvement

ISO 17025, as compared to previous laboratory accreditation standards, introduces a requirement for preventive action. This is sometimes confused with corrective action but although there are grey areas, the two concepts are essentially different.

a.     Corrective action is a response to a finding of a non-conformance or a quality failure, i.e., what you do to put right something that has gone wrong and, where possible, to make sure it does not go wrong again.

b.     Preventive action, on the other hand, is occasioned when circumstances are identified where a quality failure or non–conformance is a possibility or where an opportunity is identified to strengthen the quality system. In this respect, preventive action is a total quality management concept and contains elements of quality improvement.

There are two distinct streams of preventive action:

a.     Actions in response to a scrutiny of the quality system which identifies areas where the system could be strengthened; such actions are, in a sense, voluntary for management since the response can consider the degree of risk as opposed to the benefit. This recognizes that any quality system can always be improved, but there will be associated costs, some direct and some indirect, for example reduced efficiency. If management decides not to take an opportunity for preventive action on such a basis, justification needs to be recorded.

b.     Actions in response to identified trends showing deterioration in performance; these include trends in data but also encompass general performance indicators, for example turnaround times. There is sometimes debate about whether this is preventive or corrective action; the debate is not fruitful. Action is needed; the attached label is irrelevant. The point is that the laboratory is using a mechanism to pick up incipient quality failures before they impact.

The essential requirement of ISO 17025, as interpreted by most accreditation bodies, is that the laboratory must have mechanisms for identifying when preventive action is opportune, in the sense of opportunities for improving the quality system and responding to unsatisfactory trends in performance. A typical set of approaches is as follows:

a.     The brief of auditors should be extended to invite suggestions for where improvements to quality management might be achieved within the area which they are auditing. This activity, it should be stressed, is distinct from the audit and should be reported separately.

b.     Staff in general should be encouraged to offer suggestions for improvements in quality management or where quality can be made more secure. This can be via an anonymous suggestions system if it suits the culture of the organization to proceed this way.

c.      There should be regular formal scrutiny of trends in data, especially quality control data, by the laboratory management. This should include interlaboratory Proficiency Testing results. The objective should be to identify trends which indicate potential failures, for example bias developing on a Shewhart chart. This kind of scrutiny can be achieved by a regular meeting of senior laboratory personnel, for example the Laboratory Manager and Senior Scientists. The frequency of the meeting will depend on the volume of work, but monthly meetings are commonly held.

4.9.1       Records 

There should be a Preventive Action record form. This should carry the suggestion for preventive action followed by an evaluation by management and an explanation if the action is thought inappropriate. It should be noted, however, that the 2005 version of ISO 17025 has a specific requirement for the laboratory to “continually improve” its quality system, so arguments against the implementation of preventive action in response to identified weaknesses are likely to be difficult to sustain in future.

If action is to be taken, the proposed action should be detailed, responsibility allocated, and a date for completion set.

The Quality Manager should receive a copy of the form and should be responsible for following up to ensure that the proposed action is completed and to record this.

4.10                  Client complaints, quality incidents and other feedback

4.10.1    Complaints 

This is always a somewhat delicate subject, but the reality is that no laboratory goes very long without some queries or complaints from its clients. To comply with ISO 17025, the laboratory is required to record all client complaints, to investigate them systematically and to record the results of the investigations and any corrective action taken.

This is not intended to create a body of evidence to be used against the laboratory but is rather a recognition of the fact that such complaints provide a valuable source of feedback on the operation of the quality system. After all, the main object of the quality system is to ensure that clients are properly served.

4.10.2    Learning from client complaints

The laboratory should approach client complaints as a source of information, and they should be investigated in the same manner as any other quality incident. If there is no substance in the complaint, this can be recorded, with supporting evidence. If the complaint has substance, then the laboratory should be able to provide a record which shows the corrective action taken to rectify the problem and, most importantly, what has been done to reduce the likelihood of a recurrence. The investigation will often require an audit of some part of the quality system.

The investigation of such complaints is the responsibility of the Quality Manager although, in most instances, active participation of the laboratory management in investigating the complaint will be required.

4.10.3    Administration 

The laboratory quality documentation should state clearly which staff members are allowed to respond to complaints. This will normally be the Laboratory Manager and, perhaps, other senior staff members.

The person responding must record the complaint and the details of any response, together with a record of any corrective action already taken or intended. The record must then be passed to the Quality Manager, who should evaluate the action already taken.

The Quality Manager should decide whether the action taken is adequate or whether further investigation and corrective action is needed. The Quality Manager must decide whether an audit is required. Overall, it is the responsibility of the Quality Manager to ensure that the complaint has been properly dealt with, that corrective action has addressed the root cause and that any lessons learnt have been incorporated into the quality system. He or she is also responsible for ensuring that the record of any corrective action is made – see Section 4.8. The Quality Manager should also follow up to establish that the action has been effective.

4.10.4    Other client feedback

Proactive soliciting of client feedback has been required by ISO 17025 since the 2005 edition. Both positive and negative feedback should be sought. At the time of writing, it is not clear what mechanisms assessors will expect to see for acquiring such feedback, but the kind of thing generally expected is annual client surveys or feedback forms sent out with test results. In practice, such surveys will cover aspects of the laboratory’s client services that lie outside the technical area, since experience suggests that it is more than likely that client feedback will address issues other than data quality, for example turnaround times and sample collection.

4.11                   Quality incidents, control of non–conforming work

Any quality incident should be investigated and used as a source of information on weak points in the quality system. It is recommended that laboratories have a form for recording quality incidents, i.e., situations where quality comes into question or breaks down. These forms can be used to record complaints, internally detected quality anomalies, detection of non–conforming work and other quality failures. The Quality Manager can then process these forms and determine whether further action, for example audit or corrective action, is required. 

The form should provide for reporting the incident and should identify how it came to light, for example complaint, detection of non-conforming work, etc. There should then be a section for a report on any action already taken. An incident will normally require corrective action, so there should be a section to cross-reference any corrective action requests as described in Section 4.8. If no corrective action is proposed, then the reason why it is considered unnecessary should be recorded.

The report should be passed to the Quality Manager, who must review it to determine whether the response is satisfactory. If necessary, the Quality Manager should raise a further request for corrective action.

4.11.1    Control of non–conforming work

Detection of non-conforming work is a particular example of a quality problem which is given special emphasis in ISO 17025. Non-conforming work is any work which does not meet the laboratory’s stated standards, either with respect to mode of execution or outcome, for example quality of data.

There needs to be guidance in the quality manual on how such problems are handled. The system needs to address the following issues.

a.     When non–conforming work is detected, the work must stop and management be informed.

b.     There must be an investigation and report on the quality incident.

c.      Corrective action must be taken and recorded.

d.     When corrective action has been effected, only then may the work be repeated or resumed.

e.     The system must make clear who is responsible for determining that the problem has been resolved and that work may be started again. This will typically be the Laboratory Manager or Quality Manager.

As with all such systems in ISO 17025, the emphasis is on learning from experience and strengthening the position for the future.


5.  Personnel

5.1  Key Questions 

·       Do you have a record of the qualifications and experience of your staff, with objective evidence of their qualifications, for example copies of certificates? 

·       Do you have a clear record, for your proposed scope of accreditation, of which members of staff are authorized to conduct each test or calibration?

·       Do you have a documented procedure for training staff in quality issues and technical procedures, including tests?

·       Do you have a documented procedure for conducting evaluation of the competence of staff after training and before authorizing them for the procedure in which they are trained?

·       Do you have a system for recording training, including objective evidence of competence?

·       Do you have a mechanism for identifying which staff conducted each procedure, test or calibration?

5.2  Staff Records

It is necessary for the laboratory to have staff training and competence records. These should include the following elements, at least.

a.     A record of each staff member’s formal qualifications, previous experience and date of recruitment; all qualifications should be verified by inspection of certificates or equivalent evidence. This can be done by the personnel department or the laboratory management, but the record should include a declaration, signed by an appropriate person, of the evidence seen. This could be a Personnel Manager, the Laboratory Manager, or the Quality Manager.

b.     A list of the activities for which the staff member has been trained; this should include not only tests and calibrations for customers but also any in-house calibrations and other activities, such as quality auditing and administrative activities, for example receiving samples. Sections 5.2.1 to 5.2.3 and Section 5.3 below deal in detail with the content and format of the training record.

c.      A record of regular re–assessment of the staff member’s competence at each of the listed activities. This is dealt with in detail in Section 5.4.

The training record should also provide for the recording of any outside courses attended by the staff member and for personal certifications or memberships of relevant professional bodies. Since the last two are often renewable by annual subscription, there should also be provision for recording checks on this renewal by the laboratory management.
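
As a purely illustrative sketch, the elements above could be held in a structure such as the one below. The StaffRecord and Authorization names, and the is_authorized check, are assumptions made for this example; a paper file meeting the same requirements is equally acceptable.

    # Hypothetical staff training/competence record covering the elements listed above.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Authorization:
        activity: str           # test, calibration or other activity, matching the accreditation scope
        trained_by: str
        assessed_by: str
        authorized_on: date
        last_reassessed: date

    @dataclass
    class StaffRecord:
        name: str
        qualifications: list = field(default_factory=list)    # verified against certificates seen
        authorizations: list = field(default_factory=list)

        def is_authorized(self, activity: str) -> bool:
            return any(a.activity == activity for a in self.authorizations)

    analyst = StaffRecord("A. Technician", ["BSc Chemistry (certificate seen)"])
    analyst.authorizations.append(
        Authorization("Moisture content by oven drying", "Senior Analyst",
                      "Laboratory Manager", date(2022, 4, 1), date(2023, 4, 3)))
    print(analyst.is_authorized("Moisture content by oven drying"))   # True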

One of the purposes of the training records is to provide a source of reference for supervisors who wish to ensure that the person whom they intend to allocate to a task is properly trained and that their training is up to date. They also form an essential part of the audit trail since, as we shall see later, all raw data must be traceable to the person who generated it. The training records complete the loop by enabling a check to be made that the person doing the work was adequately trained and checked as competent.

All entries in training records should be made and initialed by the Laboratory Manager or Quality Manager and, in the case of authorizations to carry out tests or calibrations, should be countersigned by the employee. By doing so, the employee acknowledges that he or she appreciates the extent and limitations of their authorizations.

It should be emphasized that, since the training records need to be accessible to supervisory staff, internal quality auditors and assessors, it is extremely unlikely that normal personnel records, even if they include all the necessary training details, will be suitable for use in this context. Normal personnel department records are likely to contain sensitive personal information unsuitable for such wide scrutiny.

The training records must be available in the laboratory. Assessors will not accept training records held in the personnel office at Head Office!

5.2.1       Initiation of the Staff Records for New Employees

The laboratory management will need to initiate a staff record and institute a training programme for each new employee. Obviously, the content will depend on the employee’s previous experience and known level of competence. However, irrespective of whether the new recruit is of the highest general competence, it will be necessary for him or her to be trained in the laboratory’s methods and procedures.

This is not to question an experienced new employee’s basic competence, but is principally designed to ensure that the employee is familiar with the quality system, the way tests and calibrations are done and how results are recorded in this particular laboratory; the object is to achieve maximum consistency between measurements made by different staff members. The new employee must have his or her competence assessed exactly as for a trainee and must be observed carrying out the procedures for which he or she is to be registered to ensure that the documented procedure is being followed – see Section 5.3 below.

In the case of a highly experienced new recruit, this assessment of competence may not need to be preceded by extensive training, but there must be a formal familiarization with the laboratory’s procedures and an opportunity to read the documentation.

5.2.2       Starting the staff records for existing employees

Most laboratories starting up staff records will have existing employees with known areas of competence. There is no need to create a retrospective record of training for such persons. The management should prepare an initial list of authorizations for existing staff and should note that they were regarded as competent at the start of the record. There is no need for an initial assessment of competence for existing employees; they will be covered during the first re-assessment of competence – see Section 5.4.

5.2.3       Routine Operation of the Staff Records

The staff records will become a complete record of the training, promotion, assessment and re–assessment of competence for each employee.

Each record needs to show the dates over which the training was given, the identity of the trainer, the identity of the person assessing competence and the date when authorization to carry out the procedure unsupervised was given. The entries must be confirmed by the Laboratory Manager or Quality Manager and must be signed by the employee.

The areas of competence listed in the record may be defined in any way suitable to the laboratory’s operations, but for test/calibration methods there should be a direct correspondence between the accreditation scope and the training records. From the assessment body’s perspective, there are no grey areas. Either an employee is trained and competent to carry out a test/calibration in the scope or is not.

Remember that, especially for new employees, the training must include familiarization with the quality system and with basic laboratory arrangements, such as sample numbering, storage of samples and the mechanisms for recording data and reporting within the organization.

Where it is possible, objective evidence demonstrating the employee’s competence should be attached. This might be a set of data obtained by the employee on standards or reference samples, or perhaps data which has been cross checked by repetition of the test by a staff member already trained for the procedure. The person assessing competence should sign such data as acceptable and reference should be made to the source of authentication, for example a calibration certificate for a reference material or the data from repeat determinations.

5.3  Staff training and assessment of competence

Training of staff to carry out tests or calibrations must be an organized and formal process. The management must give the responsibility for carrying out the training to a specific person who is already authorized for the relevant test and who will act as Training Officer.

Staff undergoing training must not carry out work on clients’ samples or items for calibration except under the direct supervision of the Training Officer. The Training Officer must take direct responsibility for the quality of the work and must countersign all worksheets and results in recognition of this responsibility.

When the Training Officer is satisfied that the employee is properly trained, the Laboratory Manager should be informed. The Laboratory Manager must now arrange to carry out a competence test on the employee, under direct observation by the Laboratory Manager. Ideally, the test or calibration should be on items for which the results are already established, for example references or items previously tested or calibrated.

To accept the employee as competent to be authorized for the test, the Laboratory Manager must be satisfied that the documented procedure is being followed, that results and all other relevant observations are being properly recorded and that the results being obtained are correct as judged by the known values and the normal quality control checks operated by the laboratory. The general criterion for the acceptability of a staff member’s competence should be that they can confidently be expected to follow the documented procedure and consistently produce data which falls within the laboratory’s known performance band.

The Laboratory Manager may delegate the competence assessment to another member of staff, but this should be an explicit process and the Laboratory Manager must confirm the delegee’s assessment of the data generated by the trainee. Ideally, assessment of competence should not be conducted by the Training Officer appointed for the training being assessed. However, where specialist areas of expertise are concerned, this may be unavoidable as the Training Officer may be the only person with adequate knowledge to conduct the assessment.

In some quality systems the competence checks are assigned to the Quality Manager as a responsibility. This is perfectly permissible under the standard and has much to recommend it, but only if the Quality Manager has the appropriate technical expertise.

In addition to authorizations to carry out tests/calibrations, it may be necessary to have training and competence tests on instruments or routine operations. This will depend to some extent on the experience of the staff concerned and should be at the discretion of the management. For example, in the case of junior staff with little or no experience, it is entirely possible that they will require training in the use of basic equipment such as volumetric glassware, balances and thermometers.

The onus is on the laboratory to show that its staff are properly trained, and the management should approach all questions of authorization and competence testing with this in mind.

Once the management is satisfied that the trainee is competent, an appropriate entry must be made in the staff records; this should be signed by the Laboratory Manager or Quality Manager and, in the case of tests and calibrations, by the employee. The entry must be supported by a brief report stating that the employee was observed carrying out the test and giving a list of the operations performed, for example weighing samples, colorimetric measurement, titration, etc. Copies of worksheets or notebook entries completed by the trainee and copies of any instrumental output should be attached. These should be dated and signed by the person conducting the competence assessment.

5.3.1       Multi–level Authorization 

It is becoming increasingly common for accreditation bodies to expect to see competence assessment being dealt with in a multi-tier system with equivalent multi-level authorization. In such systems the initial assessment of competence leads to an authorization to work unsupervised, but the work is then subjected to further checks and, usually, countersignature by a supervisor. Subsequently, a second competence assessment is conducted and, if this is satisfactory, the authorization is extended to a second level and the checks and countersignature are dispensed with.

5.4  Re–assessment of training and competence

Competence of staff must be regularly reviewed and, normally, re-assessed. The general practice is that the authorization of a member of staff to carry out a particular test or calibration should be reviewed at least annually. The key reason for the re-assessment is to maintain consistency between data from different workers, so, in disciplines where interpretation by staff is an important input, for example textile testing and some areas of microbiology and histology, assessors may require more frequent re-assessment. Intervals of less than three months between formal re-assessments are unlikely to be required.

A useful strategy is to maintain a record which summarizes, for each employee, how often they perform a test and whether the data was acceptable or not under the laboratory’s normal quality control requirements. This record has the dual purpose of monitoring competence on a regular basis and checking that the staff member continues to have regular practice in the procedure – see later in this section for further discussion on this point. However, such a record must always be supplemented by a more formal re–assessment as described in the next paragraph. This record can easily be filled in when data is checked before release since the worksheets or other form of recording data must identify the person executing the test.
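
A minimal sketch of such a summary record is given below; it also applies the one-year rule of thumb discussed at the end of this section for flagging tests that may need refresher training. The log format and the tests_needing_refresher function are assumptions for illustration only.

    # Illustrative competence-monitoring summary for one employee; data and names are invented.
    from datetime import date, timedelta

    # (test, date performed, data acceptable under routine QC?)
    performance_log = [
        ("Chloride by titration", date(2023, 1, 12), True),
        ("Chloride by titration", date(2023, 2, 20), True),
        ("pH measurement", date(2022, 3, 5), True),
    ]

    def tests_needing_refresher(log, today, max_gap_days=365):
        """Flag tests not performed within the stated period (one year as a rule of thumb)."""
        last_done = {}
        for test, when, _acceptable in log:
            last_done[test] = max(when, last_done.get(test, when))
        return [test for test, when in last_done.items() if (today - when) > timedelta(days=max_gap_days)]

    print(tests_needing_refresher(performance_log, date(2023, 6, 1)))   # ['pH measurement']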

The formal re-assessment should take the same form as the competence test described in Section 5.3 above. That is, the employee should be observed carrying out the test to check that the documented procedure is being followed, and results should be scrutinized and ideally checked by a second person. Alternatively, reference samples can be used. A common strategy for re-assessment is to have the employee carry out one of the determinations which form part of the laboratory’s interlaboratory Proficiency Test or measurement audit programme.

The re–assessment of each employee’s authorization should be planned at the beginning of each year by the management. The re–assessment should be supported by documentary evidence in the same form as described for the competence assessment in Section 5.3 above, i.e., a report by the person observing and copies of results and instrument output.

If the review is unsatisfactory, the management must withdraw the authorization pending retraining of the employee and performance of a satisfactory competence test.

A laboratory management should always be aware that an employee, although formally trained for a test, might not perform it for a period due to the balance of work. Under these circumstances, and depending on the complexity of the operation, it might be appropriate to take the view that the employee’s authorization should be withdrawn pending a refresher course of training and re-assessment of competence. It is impossible to set hard and fast rules for determining when competence should be regarded as having lapsed; the complexity of the procedure is clearly an important factor. As a rule of thumb, however, any test not performed over the past year would be a reasonable candidate for refresher training.

5.5  Training Policy and review of training needs

ISO 17025 contains an explicit requirement that the laboratory have a policy and procedures for identifying training needs and providing training of personnel. This requirement is, essentially, focused on identifying the needs of the laboratory rather than on the professional development of individual staff. The annual review of the quality system (see Section 4.7) would provide one suitable forum for identifying training needs. Such needs generally arise at one of two levels. Firstly, there is the question of whether existing staff need to be trained to increase their versatility within the current skill base to enhance the laboratory’s flexibility and ability to cope with the workload for each test or calibration. Secondly, there is a necessity to consider training needs for any planned expansion of the laboratory’s capabilities.

A suitable policy is to review the overall picture as part of the annual quality review, but this should be supplemented by specific reviews at regular intervals by the Laboratory Manager and the Quality Manager. In addition to this routine review, whenever there are staff changes, whether due to resignations or new recruitment, a review of the implications of these changes and any resulting training requirements should be made by the management.

The other occasion when the management should review training requirements is when the laboratory is introducing a new method or instrumentation. Such changes will, almost inevitably, have implications for training or, possibly, for recruitment of new staff with appropriate skills not currently available in the laboratory.

These reviews must be recorded. The record should show the reason for the review, identification of the participants, the date and a summary of the conclusions reached, including any proposed action.

Obviously, training needs may become apparent ad hoc, for example the laboratory may have an unexpectedly high demand for a particular test and will need to train extra staff to meet the need. The training requirements review is not intended to replace such normal management response but rather to introduce a pro–active aspect into dealing with training requirements.

5.6  Action when employees leave.

When an employee leaves, the appropriate entry must be made in the staff record. The records for the ex–employee must be retained since they constitute a part of the laboratory quality record and may need to be referred to if data is called into question. See section 3.3.2 for details of appropriate retention periods for records.

5.7  Other entries in the staff records

The information discussed so far constitutes the minimum requirements of staff records. The management should feel free to insert any other information on members of staff which is relevant to their competence. This can include copies of reports on their progress in the organization and records of promotion or experience gained on projects.

Such entries may be of any format but should be signed and dated by the Laboratory Manager or Quality Manager and inserted in chronological order.

In some areas, for example, non–destructive testing, staff need to hold personal certifications which may be subject to re–assessment or renewal by subscription. Where this is the case, the training record must include details which show that the certification is current, and the laboratory must have a mechanism to check that the renewals or re–assessments are carried out as required and are recorded.


6.  Accommodation and Environmental conditions

6.1  Key Questions

·       Have you considered whether there are any environmental factors in your laboratory which might impact on the validity of tests or calibrations?

·       Are you conducting any tests or calibrations where the published procedure which you claim to follow includes a requirement for the work to be done under specific environmental conditions, for example temperature or humidity?

·       Do you have any activities which need to be separated to avoid, for example, cross contamination?

·       Do you have procedures in place which will create an objective, auditable record that environmental conditions which might affect tests or calibrations are controlled and monitored?

·       Do you have clear instructions to staff on actions to be taken when conditions move out of specification?

6.2  Some specific considerations

6.2.1       Chemical Testing

In the case of chemical analysis, cross-contamination between samples and possible environmental contamination of samples are likely to be the overriding considerations. Another concern is that chemical testing requires standards, often consisting of pure samples or concentrated solutions of the materials which are being tested for at trace levels.

It is common to monitor the temperature of chemistry laboratories, although this is rarely necessary. It can be argued that volumetric measurements are affected by temperature but, in practice, the variation in modern borosilicate volumetric glassware over a temperature range in which staff will be comfortable is not significant relative to other sources of uncertainty of measurement.

The only reason for monitoring the temperature in your laboratory is that data will be affected if a specific temperature range is not observed. You must then ensure that work stops if the temperature is out of specification. The widely observed practice of monitoring and recording the temperature in the chemistry laboratory every day, when there is no justifiable need for operations to be carried out in a specific temperature range and hence no need to respond to the monitoring results, is pointless.
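
The decision rule implied here is simple, as the sketch below shows; the working range used is invented for illustration and would, in practice, come from the test method concerned.

    # Minimal sketch: act on temperature monitoring only where a specification exists.
    SPEC_RANGE_C = (18.0, 25.0)   # hypothetical working range required by a test method

    def work_may_proceed(measured_temp_c: float, spec=SPEC_RANGE_C) -> bool:
        """Work stops, and the excursion is recorded, if the temperature is outside the specified range."""
        low, high = spec
        return low <= measured_temp_c <= high

    print(work_may_proceed(21.5))   # True - within specification, work continues
    print(work_may_proceed(27.0))   # False - stop work, record the excursion and investigate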

The following general rules should be observed for good practice in general chemical testing work:

a.     Provide segregated areas with their own glassware for the storage of standards and the preparation of concentrated solutions. Operate rules to ensure that only the very dilute solutions of standards necessary for calibration of equipment are ever introduced into areas where samples are being handled and processed. Take precautions to avoid spillage of standards, for example by carrying them inside double containers.

b.     Where samples containing high levels and low levels of the same targets are being handled, for example pesticide formulations and samples for residue analysis, carry out the sample preparation work and, where possible, the instrumental analysis in well separated rooms with their own glassware.

c.      Where possible, provide segregated washing up facilities for glassware with segregated uses. If this is not possible, then ensure a management regime such that glassware is not interchanged, for example by using clearly labelled baskets to deliver it to and collect it from the washroom.

d.     Enforce good housekeeping and tidiness by general management pressure; have a designated time each week for cleaning and tidying the laboratory.

e.     Have a system for reporting and recording all spillages. Where foreseeable, have a documented procedure for dealing with specific types of spillages. 

6.2.2       Microbiology

In the case of activities such as microbiology, the assessors will look very closely to see that the laboratory is designed so that it is easily kept clean and any spillages can be contained and thoroughly cleaned up. Impervious bench tops with good seals against walls and floor and around fittings such as sinks will be expected. Floors need to be continuous sealed surfaces. In practice, microbiology laboratories will need to be air–conditioned with split type units and all windows sealed to prevent opening. Entry to the laboratory should always be double doored with vestibule and changing/washing area.

An issue often raised in microbiology is whether reference cultures of organisms should be held for control purposes, bearing in mind the danger of contamination of the laboratory by organisms which are the subject of tests on samples.

There is no doubt that positive controls are essential, so laboratories must work with them. The following precautions are needed, however:

a.     Segregate storage of reference cultures in their own dedicated refrigerators and freezers.

b.     Have a segregated area for handling the references, ideally with a laminar flow cabinet.

c.      Use dedicated laboratory coats and overshoes/shoes for work in the segregated area.

The following will be expected in an accredited microbiology laboratory:

a.     Clear segregation of samples, references, and media storage.

b.     Dedicated laboratory coats and footwear with a changing area where staff can wash.

c.      A planned cleaning regime for the laboratory, covering benches, floors, windows, light fittings, ventilation grills, air conditioners, water baths and autoclaves. The frequency and actual scope of these activities are a matter of debate between different experts, but accreditation bodies will normally have technical guidance documents specifying their particular expectations.

d.     Documented procedures for dealing with spillages and records of spillages and action taken.

e.     Monitoring of temperature and humidity: limits need not be stringent, but humidity above 50% and temperatures above 25°C can lead to problems of mold growth.

6.2.3       Materials and product testing laboratories

Into this category fall activities such as textile testing, leather testing, paper testing and some mechanical and electrical testing. Often the test procedures in these areas require that samples be pre–conditioned and then tested under specified atmospheric conditions. If these conditions are not met, then the test procedure is not being followed and cannot be accredited.

Laboratories working in these fields will need specialized equipment to control temperature and possibly humidity. Moreover, the conditions will need to be independently monitored and recorded to provide evidence that any given test was, in fact, performed under the correct conditions. The best way to do this is to have a continuous recorder, such as a thermohygrograph, so that a chart or computer record is generated which can be archived. Such equipment must, of course, be calibrated.

It is not unusual to find laboratories with temperature and humidity control where the control equipment is only operated during working hours. Often such laboratories have a separate atmospheric control cabinet, which is run continuously and where samples are pre–conditioned.

This mode of operation is possible in an accredited laboratory but is not entirely satisfactory. Moreover, careful records are required as follows (a simple audit check along these lines is sketched after the list):

a.     The atmospheric parameters must be recorded continuously so that the time when they were in specification is clearly established.

b.     The start and end time of each test must be recorded so that it is possible to audit this against the atmospheric condition record.
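To illustrate the audit described in points a and b, the following minimal sketch (Python) checks whether every logged reading between a test's start and end times was within specification. The CSV layout (columns timestamp, temp_c, rh_percent) and the specification limits are illustrative assumptions only, not a prescribed record format.

import csv
from datetime import datetime

def conditions_in_spec(log_path, start, end,
                       temp_range=(18.0, 22.0), rh_range=(60.0, 70.0)):
    # Return True if every logged reading between start and end lies
    # inside the (assumed) specification limits.
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            t = datetime.fromisoformat(row["timestamp"])
            if start <= t <= end:
                if not (temp_range[0] <= float(row["temp_c"]) <= temp_range[1]):
                    return False
                if not (rh_range[0] <= float(row["rh_percent"]) <= rh_range[1]):
                    return False
    return True

# Example: audit a test run from 09:00 to 11:30 on 1 March 2023.
# in_spec = conditions_in_spec("conditions_log.csv",
#                              datetime(2023, 3, 1, 9, 0),
#                              datetime(2023, 3, 1, 11, 30))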

6.2.4       Calibration Laboratories

Environmental conditions in calibration laboratories are absolutely key to ensuring that work can be conducted to acceptable and known levels of measurement uncertainty. Most calibrations are affected by temperature and many by humidity. Moreover, calibrations often take considerable time to perform, so stability of environmental conditions is vital to keep uncertainty of measurement within reasonable limits.

Typically, the temperature of a calibration laboratory must be stable to within at least one degree centigrade and relative humidity to within three units. This requires high specification air conditioning equipment and control systems and careful design of the laboratory. Normal air conditioning systems in buildings are never adequate for calibration laboratories, and dedicated systems with appropriate capabilities need to be installed locally.

All doors to the controlled area need to be double and, ideally, calibration laboratories should have no windows, especially in tropical climates. Admitting sunlight places an additional load on the air conditioning and, more seriously, may cause hot spots.

Temperatures and humidity must be monitored continuously by equipment which generates a chart or computer record so that audit is possible. It is also necessary, in calibration laboratories, to maintain the atmospheric conditions continuously. The compromise of running air conditioning only during working hours discussed above, section 6.2.3, for testing laboratories is not acceptable in calibration laboratories. This is because much calibration equipment and, especially, references need to be equilibrated with the laboratory atmosphere so that they are in a stable and reproducible condition. Since many items used, for example reference masses, gauge blocks and standard resistors, are quite massive, such equilibration can require periods of several days of continuous maintenance of the conditions.

In addition to the atmospheric conditions, other considerations will be relevant to some calibrations. For example, mass calibrations require a vibration free environment, and procedures involving electronic measuring and reference equipment may be susceptible to electromagnetic effects which must be eliminated. 

6.3  Access to laboratories and security

A laboratory will need to have a policy on access to laboratories by persons other than those normally working there. This includes members of staff, perhaps working in other sections, management and outside personnel, including customers.

ISO 17025 has little to say explicitly on visitors to laboratories except that in clause 4.7 (service to the customer), laboratories are encouraged to allow clients access to monitor the laboratory’s performance in relation to their work, provided confidentiality of work for other clients is not compromised.

The key factors to consider when allowing access to laboratories are confidentiality and any factors which might affect the validity of data, for example contamination. There should be a policy which ensures that:

a.     It is clear who may authorize access to the laboratory by persons other than staff. This will normally be a senior person in the laboratory, such as the Laboratory Manager. The person giving the authorization should be clearly assigned the responsibility of ensuring confidentiality is preserved. 

b.     It is clear which staff from departments of the organization other than the laboratory and ancillary staff such as cleaners have access. Such access is normally permitted where necessary for efficient working but should otherwise be restricted.

c.      Where there are areas needing to be protected from unauthorized access, for example areas where there are hazards for untrained persons or contamination risks, such areas have active controls; digital locks with restricted distribution of the codes are widely used for this purpose.

d.     Where contamination is a possible issue, for example in microbiology laboratories, any visitors to the laboratory are bound by the same rules about wearing laboratory overalls and footwear as are workers in the laboratory.

e.     Any clients or other outside visitors are always accompanied when in the laboratory. Some relaxation of this is normally operated for service engineers who may be working in the laboratory for long periods and where continuous supervision is impracticable. The best practice is to ensure that the engineer is advised of the areas of the laboratory where he or she is permitted access. The engineer should also have a clearly assigned person on the laboratory staff to liaise with or who can be approached if the engineer needs access to other areas.

f.      It is common to have a signing-in system for visitors to laboratories, and local safety regulations often require one. This practice is encouraged since it formalizes access although, of itself, it provides no barrier.

Overall, security of the laboratory is necessary not only to protect confidentiality but also to ensure the integrity of samples and data. There should be a clear barrier between public areas of the organization and the laboratory and, ideally, a physical barrier such as a door with a digital lock with, possibly, a doorbell for non–laboratory staff wishing to enter.

Out of working hours, laboratories should be locked or covered by active security. Samples should be separately secured, ideally in locked storage, and data should, at least, be tidied away into drawers or cupboards. In the case of laboratories where integrity of samples may be called into question, for example forensic laboratories, this is particularly important and a formal chain of custody for samples may be needed.

6.4  Health and Safety

Comfort of staff and compliance with health and safety legislation are not explicit concerns of ISO 17025, although some national accreditation standards include compliance with local laws on health and safety as a condition of accreditation.

On the other hand, assessors will certainly not be impressed with a dangerous-looking laboratory nor one where working conditions are poor. The rules of most accreditation bodies, moreover, allow assessors to refuse to enter areas which they consider dangerous.


7.  Test and Calibration Methods, Method Validation and Quality Control

7.1  Key Questions

·       Do you have a set of methods specified as acceptable for use in your laboratory?

·       Are they documented to the extent necessary to ensure they are applied properly and consistently?

·       Does your laboratory use standard methods which are published and widely accepted? 

·       If not, do you have evidence to show that the methods you are using are fit for the purpose claimed and acceptable to your clients?

·       Have you determined the accuracy, precision and, where relevant, the limit of detection of the methods which you use, including standard published methods?

·       Do you run routine quality control samples and evaluate the results before releasing data?

·       Do you monitor trends in quality control results to anticipate possible problems?

·       Have you tested your methods and laboratory by use of certified reference materials and/or interlaboratory comparison?

7.2  Choice of Methods

ISO 17025 requires that the laboratory use appropriate methods which meet the needs of the client and, where possible, methods published as national, regional or international standards by reputable technical organizations or in relevant scientific texts or journals.

Where it is necessary to employ non–standard methods, these must be agreed with the client and must be fully validated and documented. Laboratories can develop their own methods, but these will have to be fully validated to show that they are appropriate and fit for purpose.

In practice, methods used by laboratories fall into one of three categories:

a.     Standard methods which are published as standard specifications, for example ISO standards, ASTM (American Society for Testing and Materials) and national standards or are published in the scientific literature; where laboratories claim these as part of their scope they must be followed precisely without variation from the published specification. The laboratory will not have to carry out full method validation but will have to have data to show that it can achieve the level of performance which the standard specification claims for the method or, failing that, a level of performance appropriate for the purpose for which the measurement is being made.

b.     Documented in–house methods which are the laboratory’s own methods; these must be subject to a high level of validation. The accreditation body will need to see the validation data, and assessors will have to be presented with data to satisfy them that the method is technically sound, suitable for the purpose claimed and acceptable to clients.

c.      Documented in–house methods based on standard specifications; this category makes up a major part of many laboratories’ scopes since it avoids the commitment of being pinned to the fine print of the standard specification whilst maintaining the credibility provided by the standard specification. Placing an in–house method in this category will generally reduce the amount of validation which a laboratory has to do. The degree to which this is true, however, will depend on the extent of the departure from the standard specification. Care needs to be taken, when reporting data from such methods, to recognize the variation from the standard specification. It is also necessary to ensure that clients are aware of the variation and accept the resulting data as still being suitable for their purposes.

An issue which often arises is where a standard specification has been revised but the laboratory, or its clients, wishes to continue to use the old version. The general rule is that clients of the laboratory who request a test to a particular standard specification are entitled to assume that the laboratory will use the current version and, if it is using an older version, then they must be informed and advised of the differences. Whether to proceed then becomes the client’s decision. On the other hand, if the client specifies an older version, then the laboratory must respect the client’s wishes, subject to the requirements to draw the client’s attention to any limitations introduced by this choice.

Any report must, of course, specify exactly which method was used and note any deviations from the standard procedure.

The accreditation body will often be reluctant to allow an older version of a standard method to be quoted on a scope but there is nothing in the current version of ISO 17025 which precludes this absolutely.

An out–of–date standard should be included amongst the laboratory’s documentation only with care, and the document should be clearly marked with details of when it is appropriate to use it, for example for work for a particular client. The laboratory will have to demonstrate to the assessors that there is no danger of the method being used in error as the current version.

In cases where a client requests a particular method, ISO 17025 places an explicit onus on laboratories to inform the client when they consider the method to be inappropriate or out–of–date. Of course, the client may still insist on the method despite the laboratory’s reservations. In such circumstances, the laboratory may proceed but should advise the client, in writing, of the limitations on the applicability of the data which will result from the choice of method and should reflect its views in any report issued.

The laboratory Quality Manual should have a statement that the laboratory conforms to the policy of using standard methods wherever possible and should quote a list of examples of sources regarded by the laboratory as appropriate to its type of work. There should also be a general policy statement on the laboratory’s perception of its own area of expertise.

7.3  Review of Requests, Tenders, and Contracts

The requirement for recorded contract review in ISO 17025 formalizes the process of interaction with the client on the selection of a method. The onus is on the laboratory to ensure that, as far as it is possible to ascertain, the client receives a service which meets their needs. Moreover, the laboratory must be satisfied, before accepting the work, that it has the capability and resources to conduct it. In practice, any responsible laboratory will go through this process anyway, but ISO 17025 requires it to be formalized and recorded.

The sequence of events in contract review should be something like this:

1.     A request is received from the client.

2.     The laboratory determines whether the request is clear, in that it either identifies specifically the test or calibration procedure required or makes clear the client’s objective in requesting the work.

3.     The laboratory identifies whether the requested work is routine, in the sense that it has a validated, documented, and appropriate procedure. If the work is identified as routine, then all that is necessary is for the laboratory to ensure that it can meet the client’s requirements on turnaround. There might, though, be an issue here if the work requested involves an abnormally large number of samples, for example.

4.     If the work is not identified as routine, then it will be necessary for the laboratory to determine whether it can accept it. This will require an assessment of whether the necessary equipment and expertise are available. A method will also have to be identified and arrangements made to validate it.

In the case of work not obviously routine or where instructions are not clear, however, and always on the initial interaction with a new client, a full review will need to be conducted and recorded. This complete process of contract review will normally involve interaction with the client, culminating in the laboratory communicating its intentions to the client and seeking their approval. All of this must be recorded, including notes of telephone conversations, and correspondence attached. The laboratory should have a simple standard pro–forma for recording the steps in the contract review.

The pro–forma should identify who conducted the review, the client details and contact information, and details of the work requested, either explicitly or by reference to an attachment such as a purchase order. A part of the pro–forma should require and record an assessment of whether the work is routine, in the sense that it can use one of the laboratory’s standard methods. If work is identified as routine, then the record can stop there, but the pro–forma should provide for further review and records. These should include identification of the capabilities needed to carry out the work: expertise, equipment, method selection and validation.

Finally, there should be provision for recording the client’s approval of the laboratory’s proposed approach to the work. There are no absolute requirements to seek such approval in writing, but the pro–forma should identify who gave approval on behalf of the client and the means, for example in writing, by telephone, etc.
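As an illustration only, a contract review record of the kind described above might be represented in a laboratory information system along the following lines (Python); all field names are hypothetical and do not constitute a prescribed pro–forma.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ContractReview:
    reviewer: str                              # who conducted the review
    client: str                                # client details and contact information
    work_requested: str                        # explicit description or reference to a purchase order
    review_date: date
    routine: bool                              # can an existing validated standard method be used?
    method_reference: Optional[str] = None     # method identified for non-routine work
    resources_confirmed: bool = False          # expertise and equipment available
    client_approval_by: Optional[str] = None   # who gave approval on behalf of the client
    approval_means: Optional[str] = None       # e.g. "in writing", "by telephone"
    notes: List[str] = field(default_factory=list)  # telephone notes, correspondence references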

The Quality Manual should document the procedure for contract review and assign responsibility. The key aspect should be to specify who may conduct the initial assessment of whether work is routine or not. It will then normally be necessary to identify who has the responsibility for determining whether a request for non–routine work will be accepted and to assign authority to evaluate and commit the necessary resources.

Typically, the laboratory’s normal process for receiving requests or samples from clients will form the front end of the process, and relatively junior staff can operate this system and even contact clients for clarification of unclear requests. However, the decision on acceptance of non-routine or high-volume work will usually have to be referred to management.

The standard does not require the full contract review process to be conducted every time an individual piece of work is received. It recognizes that repeat work from established clients need only be subject to contract review at the initial setting up of the programme or, subsequently, if there are any significant changes. For such repeat work, the requirement to record contract review is satisfied by recording the receipt of the work, the date and the identity of the person conducting the work.

It is a common situation for laboratories to receive routine samples from regular clients with little or no information included on the work required. The laboratory knows the clients, knows what they normally want, and the client assumes this. This can lead to questions from accreditation bodies about how the laboratory communicates the clients’ usual requirements to staff receiving and processing samples and, especially, how any changes in the clients’ requirements are communicated. The best way to cover this is for a laboratory to have a set of documents in the sample reception office which show the current requirements for each regular client. These can then be referred to by staff. Such documents form part of the controlled document system and are updated as necessary when client requirements change.

It may be necessary to revisit the contract review during the work if any significant amendments are required, perhaps because of changes requested by the client or, more commonly, because of problems with test or calibration items themselves. The laboratory is under an obligation to inform the client of any deviations from the contract and to obtain approval. This must be recorded.

A final note on sub–contracting: contract review is required even if the laboratory subcontracts work. In this instance, the review will involve the selection of an appropriate subcontractor and agreement with the client to the subcontract. See Section 15 for further discussion of sub–contracting.

7.4  Method validation

7.4.1       What is Method Validation?

Method validation can be seen as a two-stage process, with the stages roughly equating to the somewhat outmoded concepts of establishing precision and accuracy. In the first instance, the laboratory needs to establish the extent to which it can reproduce measurements and hence show that it can deliver consistent data within known limits. This is only the first phase, however, since a laboratory which can reproduce measurements well might still have a bias in its data. It could, so to speak, be consistently wrong. To address this issue, the laboratory will have to look outside and test itself against agreed reference points.

In calibration, the equivalent to method validation is the establishment of the “best measurement capability.” This is a measure of the smallest measurement uncertainty which the laboratory can achieve for the specific calibration under ideal circumstances. Clearly the reproducibility of the measurement is a key factor in limiting the best measurement capability, but there must also be a test to establish whether there is any bias which will also impose limitations. The “ideal circumstances” referred to above pertain to a situation where the only source of uncertainty is that arising from the calibration procedure and references, and where there is no contribution from the item being calibrated. Clearly this represents a practically unachievable ideal but is a useful concept in that it sets a lower limit on the uncertainty of the calibration. Any real calibration must have an uncertainty greater than the best measurement capability.

7.4.2       Extent of Method Validation

The key to determining how much validation is needed for a method is to be found in the “fit for purpose” requirements. The onus is on the laboratory to show that the method as applied by it is suitable for the purpose claimed or demanded by clients.

If the laboratory has devised the method itself, then adequate validation might well be a very complex and involved process requiring a demonstration of the scope of applicability of the method in terms of samples and numerical range, selectivity, robustness in use, accuracy, precision, bias, linearity, detection limit and any other relevant characteristics.

If the method is a standard published method, however, most of these factors will already have been investigated and specified as part of the method documentation. However, whatever the origin of the method, some validation will be required to establish that the performance of the method in that laboratory is satisfactory. Even if typical accuracy and precision data is published with the method, and the method is followed precisely as written down in the literature, a laboratory cannot automatically assume that it will reproduce these figures. There is no guarantee that the laboratory’s skills or the performance of its instruments are of the same standard as those used to generate the standard validation data. The laboratory must always test its own capability directly.

Assuming the method is being applied to the same types of samples and in the same measurement range as specified in the published method, then, as a minimum, the laboratory must determine what its precision and accuracy are for the method and, if relevant, any limit of detection.
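As a minimal sketch of these in-house checks, the following functions (Python) estimate precision, bias and a detection limit from replicate results. The three-standard-deviations-of-the-blank convention for the limit of detection is a common one but is an assumption here, not a requirement stated above.

from statistics import mean, stdev

def precision_and_bias(results, expected_value):
    # Mean, standard deviation (precision) and bias of replicate results
    # on a reference or spiked sample of known value.
    m = mean(results)
    return m, stdev(results), m - expected_value

def detection_limit(blank_results, k=3.0):
    # Limit of detection estimated as k times the standard deviation of
    # replicate blank results (a common convention, assumed here).
    return k * stdev(blank_results)

# Illustrative use:
# m, s, bias = precision_and_bias([10.2, 9.8, 10.1, 10.4, 9.9], expected_value=10.0)
# lod = detection_limit([0.05, 0.07, 0.04, 0.06, 0.05])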

7.4.3       Relationship between method validation and quality control

Method validation is typically an exercise undertaken when a laboratory devises or adopts a method. Having established the performance characteristics of the method, it is necessary to put measures in place to ensure that the demonstrated performance is maintained in routine use and to detect deviations from the ideal performance. These measures are generally encompassed by the term quality control.

Quality control is a discipline–specific activity but, in general terms, the ideal approach to it is to have samples or calibration items available for which the expected result is known. These are passed through the test or calibration process along with normal items for test or calibration and the data generated from the controls is compared with the expected values. Approaches to quality control and the evaluation of quality control data are discussed further in Section 7.5.

7.5  Assuring the Quality of Test and Calibration results

7.5.1       Use of Certified Reference Materials

An important way for laboratories to calibrate their methods is the use of certified reference materials (CRMs). A CRM is a sample for which the test results are firmly established and agreed, ideally on an international basis. CRMs are sold by some national standards bureaux and similar organizations and are usually verified by highly respected reference laboratories or by interlaboratory calibration.

Acceptable procedures for certification of reference materials are detailed in ISO Guide 35:1989, Guide to General Statistical Principles for the Certification of Reference Materials. This source also contains much information that can be equally applied in the production of in–house reference materials. The effective use of certified references and the evaluation of data generated by their use is covered by ISO Guide 33, Guide to the use of certified reference materials.

A laboratory which wishes to calibrate its methods can periodically check its performance by testing the CRM and so establish traceability. Laboratories which can achieve correct results with the CRM should, in theory, agree on any other test for the same parameters in the same matrix.

To be effective, a CRM must be typical of the samples which the laboratory is testing on a routine basis. A method for effluents will, ideally, require a CRM which is a typical effluent.

Needless to say, CRMs must be stable and highly homogeneous as well as of established composition or properties. This is readily achieved in some areas, such as the chemical analysis of alloys, the measurement of physical properties such as mass, dimensions, etc., and with some geological samples, but things are not so simple in other areas of testing.

Materials testing, for example, often represents a particular difficulty since samples are usually destroyed in testing. The best that can be achieved here, therefore, is to have many samples from the same source, for example cut from the same plate of metal or drawn from the same concrete mix, and to test a statistically significant number to arrive at an agreed figure for the whole batch.

Microbiology provides a different sort of problem since sample stability is virtually impossible to achieve. However, in microbiology there are, at least, certified reference cultures which provide a definition of a particular organism so that laboratories can verify that their test systems are adequately selective. Some moves are in progress which, it is hoped, will lead to the ability to prepare quantitative microbiological references. These are generally based on the impregnation of cultures onto plastic supports of controlled porosity.

Where certified reference materials are not available there are several alternative strategies, but the main approach is participation in Interlaboratory Proficiency Exercises – see Section 7.5.5. Such schemes, at least, give a laboratory a measure of its data relative to other similar laboratories and, if organized properly, provide a very effective alternative to the use of certified references. Accreditation bodies will generally expect participation in appropriate schemes but, where certified references are available, these will be expected to be used as well.

Many basic test methods, especially in analytical chemistry, are intrinsically traceable. There is no need to have a certified reference for most titrations, for example. Here, traceability is provided via the calibrations of the balance and volumetric apparatus. The purist may argue that certification of the purity of the reagents which are weighed or measured is necessary but, provided the origin of the compounds is known and they are of known specification, it would be a harsh interpretation of the standard to insist upon this.

7.5.2       Use of Spikes

Spikes are widely used for method validation and calibration in chemistry and microbiology. They provide a reasonable alternative to certified references, if the spiking material is adequately authenticated, ideally by certification of its purity. On the face of it, a spike has the advantage that the laboratory can spike into a matrix which is typical of its normal sample stream. The counter argument is to question whether a material spiked into the sample artificially is present in the same distribution and speciation as the actual target. The strength of this argument depends on the matrix. A metal ion spiked into a water sample might well be regarded as a valid approach, but a pesticide spiked into a food sample may be questioned on the grounds that the pesticide in real samples was, perhaps, systematically absorbed by the crop used to make the food and so may be bound into the cell structure. However, in complex matrices the spike may be the only alternative, however imperfect it may be suspected to be.

A spike is generated by taking a real sample and adding a known amount of the target in question. Ideally, the base sample for the spike should have little or none of the target before spiking. If this is not possible, the spike level should be large compared to the natural level present; of course, the natural level must be known in this instance. The spike must be thoroughly mixed and distributed homogeneously throughout the matrix.

The spike does not provide true traceability, but it can be reasonably assumed that laboratories which are able to demonstrate good recoveries of spikes have good accuracy and hence will tend to agree.
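A minimal sketch of the usual spike recovery check is shown below (Python); the percentage-recovery formula is the conventional one and the figures in the example are illustrative only.

def spike_recovery(measured_spiked, measured_unspiked, amount_added):
    # Percentage recovery of the added spike.
    return 100.0 * (measured_spiked - measured_unspiked) / amount_added

# Example: a base sample at 0.2 mg/L, spiked with 2.0 mg/L and measured at
# 2.1 mg/L, gives a recovery of 95%.
# recovery = spike_recovery(2.1, 0.2, 2.0)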

The use of spikes is especially important where laboratories are carrying out tests in complex matrices which may affect the results. Examples are water analysis, where matrix effects are common, and microbiology, where components of the sample may well affect the viability of organisms. The spike, at the very least, demonstrates that the laboratory would detect the material or organism being sought if it were present.

7.5.3       References in Calibration

The process of calibration involves the direct comparison of the item to be calibrated against a reference. It is, therefore, the reference itself which provides the guarantee of accuracy, and so it is critical that the reference itself is maintained and checked regularly.

This will often only be possible by sending the reference for calibration. However, in many instances, the calibration laboratory can work with a hierarchy of standards whereby a reference standard is maintained and used only for occasional checks on working standards. In calibration of balances, for example, this is common. The laboratory will have a reference set of masses and, possibly, several working sets which are compared with the reference regularly. The reference is, perhaps, calibrated externally annually.

7.5.4       The use of Quality Control samples

Quality Control samples are used in the same way as spikes and CRMs. They are merely samples for which the laboratory has established values and acceptance limits. They are tested along with unknown samples as a performance check. The laboratory may establish the values of the analytical quality control samples by repeated testing, but they should, ideally, be confirmed by at least two other laboratories.

If possible, quality control samples should be calibrated against CRMs. In this instance they become transfer standards, and the quality control sample provides traceability. This strategy is frequently adopted when expensive CRMs are needed since the laboratory can use the quality control sample routinely and check it only occasionally against the expensive CRM.

ISO Guide 35 is a useful source of information on procedures for validating in–house quality control samples and confirming their homogeneity. Although, strictly speaking, the Guide is intended to refer to the production of CRMs, similar principles apply to the production of in–house reference materials for use as quality control samples.

As with the spikes, quality control samples which are not calibrated against CRMs do not provide traceability in themselves but demonstrate consistency of performance on the part of the laboratory. Such consistency, when combined with satisfactory results from interlaboratory exercises showing that the laboratory normally agrees with its peers, comes a very close second to establishing true traceability and is, in many situations, the only possible option.

The laboratory will need to have a policy on the use of quality control samples and for evaluating and responding to quality control results. Guidance on this is given in Section 7.6.

7.5.5       The role of Interlaboratory Comparison

Laboratories using spikes and other forms of analytical quality control samples are effectively monitoring the consistency of their own performance. They should have a very good picture of their internal reproducibility. If they perform well on spikes and on any CRMs, they also have every reason to believe that their results are accurate.

Nonetheless, it is in the interest of any laboratory to test this assumption from time to time by exchanging samples with other laboratories and comparing results.

Such exercises are a very effective extension to the internal quality control programme of laboratories. They also provide an element of traceability when CRMs are not available since the more laboratories agree on results and the wider the range of samples on which they agree the more certain everyone can be of the accuracy of the collective results. This is further reinforced if agreement spans several analytical methods for the same determinant.

Interlaboratory studies may be informal, in that a group of laboratories exchanges samples on an ad hoc basis, or may be formal exercises organized by a third party which circulates performance indicators. Irrespective of how it is done, the crucial part of the exercise is that the laboratory uses the data. This means reacting positively to any results which indicate that it is not performing as well as the other participants and carrying out remedial action.

Participation in appropriate Interlaboratory Proficiency Schemes will normally be required by accreditation bodies, and some bodies operate their own schemes. Although interlaboratory comparison is listed in ISO 17025 as only one of several quality maintenance options, most accreditation bodies will insist on its use wherever possible. If there are schemes available in a laboratory’s sphere of activity or opportunities for ad hoc exchanges with other laboratories operating in the field, the accreditation applicant will have to provide good reasons to the accreditation body for not participating in these activities. The accreditation body simply sees inter–comparisons as the most stringent test of a laboratory and wants to see the laboratory subject itself to such a test.

The recently changed world security situation has resulted in severe difficulties in shipping samples for Proficiency Testing across national boundaries, especially by airfreight. This has, unfortunately, coincided with an increase in the insistence, by accreditation bodies and particularly by the regional laboratory accreditation conferences, on Proficiency Testing as an activity for accredited laboratories. As a result, it is becoming almost essential for any national accreditation body to ensure that adequate Proficiency Testing is available within the country before it can seek international recognition through MRAs and conference membership.

Accreditation will not normally be conditional upon any level of performance in interlaboratory comparison, but what will be required is for the laboratory to have a documented procedure for evaluating the results from its participation and for responding to any problems revealed. There must also be records showing that the results were evaluated and what action was taken to remedy problems.

The accreditation body will not withdraw accreditation based on isolated instances of poor interlaboratory proficiency performance. However, if a laboratory is consistently failing in Proficiency Testing and not taking effective corrective action, accreditation will certainly be jeopardized.

In the case of calibration laboratories, it is usual for accreditation bodies to organize interlaboratory comparisons. These are normally referred to as measurement audits.

7.5.5.1  Procedure for Evaluating Performance in Interlaboratory Comparisons. 

This activity must be formal, as noted above. A suitable approach is for the senior laboratory staff to meet and evaluate performance. The evaluation should take account of the laboratory’s known uncertainty of measurement. Whenever the laboratory determines that the result which it has returned differs significantly from the expected result or from the mean obtained by other participating laboratories, an investigation to establish the reason for the problem and the initiation of appropriate corrective action are called for, together with measures to check that the corrective action was effective. See Section 4.8 for discussion of corrective action.
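By way of illustration, one common way of expressing such an evaluation quantitatively is sketched below (Python); comparing the deviation with the laboratory’s expanded uncertainty, and calculating a z-score against the scheme’s standard deviation, are widely used conventions assumed here rather than prescribed by the text.

def needs_investigation(lab_result, assigned_value, expanded_uncertainty):
    # Flag a result whose deviation from the assigned value exceeds the
    # laboratory's expanded uncertainty of measurement.
    return abs(lab_result - assigned_value) > expanded_uncertainty

def z_score(lab_result, assigned_value, scheme_sd):
    # z-score of the kind reported by many proficiency schemes; |z| > 2 is
    # commonly treated as a warning and |z| > 3 as requiring action.
    return (lab_result - assigned_value) / scheme_sd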

The meeting should be minuted and the agreed action and its expected outcome, recorded. Even if the meeting concludes that the laboratory's performance was satisfactory, a record must be made.

The accreditation body will expect to see a file of the reports on interlaboratory comparisons and, with each report, a record of evaluation of the data (the meeting minutes) and a report on any corrective action.

7.5.6       Other methods for assuring confidence in results.

In the case of many methods, neither certified reference materials nor effective spikes are available. There may, however, be what are often referred to as consensus standards, recognized by all parties concerned. These would include many industry standards, such as those used in, for example, petroleum source rock analysis or colour fastness measurements. Such standards may not be traceable in a strict sense but are used to ensure consistency of data within the industry sector and hence form a basis for agreement when testing against product quality standards.

Another approach that is recognized as a means of providing confidence in results, and one well established in analytical chemistry, is to carry out determinations by different methods: comparable answers obtained by independent methods are generally persuasive.

Repeated determinations are also used to provide confidence in results. Such confidence may be misplaced since errors may be repeated, especially systematic errors resulting from poor design of the system or errors in making up reagents. Where repeat determinations can be valuable is if samples are retained over a relatively long timescale and then resubmitted, ideally blind to testing personnel, to check the consistency of the data.

Correlation of results from different characteristics of an item is also mentioned as a quality assurance method in ISO 17025. The extent to which this is relevant will depend not only on the type of testing being carried out but also on whether the client has requested a range of tests rather than an isolated test. There is no explicit requirement for the laboratory to do additional tests to provide data for such correlations. Most laboratories will scrutinize data in this way where they can, for example in water analysis to confirm that the pH, hardness, alkalinity, conductivity, and dissolved solids present a consistent picture. This type of scrutiny should, however, be systematized, and there should be guidelines in the methods documentation on the criteria to be used in the assessment so that it is made consistently.

7.5.7       Reference Methods

In the case of some determinants, different methods may give different results. This may arise because all the available methods are imperfect or because the method effectively defines what is measured.

BOD is an example of the latter, and BOD5 at 20°C with measurement by Winkler titration is generally accepted as the definitive method. Many food proximate analyses also fall into the same category. Measurements of “fat” or “fibre”, for example, are clearly method–dependent since they do not measure precisely defined chemical species. If “fat” is determined by weighing the material extracted with pentane, for example, the definition we are adopting for “fat” is that material which is extractable by pentane.

In these cases, the “correct” result is defined in terms of a reference method which is tightly specified, and traceability effectively means traceability to the reference method. Laboratories using methods other than the reference method should calibrate them against the reference method from time to time and would be under an obligation to demonstrate that any method which they choose to adopt gives data comparable to that from the reference method.

Even where targets are clearly defined, it may be necessary to agree on a reference method. This will arise when there are several methods which typically return different results.

Ideally, the technical problems implied by the inconsistency between methods should be resolved, the best method chosen, and the rest discarded. Sometimes, however, it is not possible to come to a definitive answer on which method is technically superior and, especially where enforcement is the issue, one method is arbitrarily defined as giving the correct result to solve the impasse.  

Under these circumstances, interlaboratory calibrations are essential since the method defines the reference values, and this is meaningless unless all participating laboratories can produce results which agree when they all use the reference method.

7.6  Application of Quality Control and managing response to results

Laboratories have traditionally approached quality control by including items of known properties in each test or calibration batch and evaluating the results against defined criteria to decide whether the data for the batch should be accepted or rejected. This approach has the virtue of simplicity and, provided the accept/reject criteria are set properly, it will defend the laboratory’s contention that released data continues to meet defined performance characteristics.

However, if a laboratory only uses its quality control data in this fashion, it is failing to make full use of it. A laboratory might have a situation where all the quality control samples are producing data which falls within the acceptance limits but always on one side, for example consistently high relative to the expected value. This situation bears investigation since there should be a random scatter about the expected value. The bias therefore gives an early warning of a problem with the test or calibration system, and what is useful is that the problem has been detected before the data is compromised.

Increasingly, assessors expect laboratories to make use of their quality control data in this fashion. Paragraph 5.9.1 of ISO 17025 makes explicit reference to recording data in such a way that trends can be detected, which strongly implies the use of control charts. The next section introduces the use of control charts as might be adopted by a laboratory which is new to the area of statistical quality control.

7.6.1       Statistical Quality Control and Preventive Action

There should be an active coordination of quality control with a regular review of performance, and records should be kept of the results of reviews and any action taken in response. This should provide a mechanism of anticipating problems with methods before they affect quality and as such is an important contribution to preventive action. See section 4.9 for further discussion of preventive action.

The management should collect the results from quality control samples for each method and plot the data on a control chart. The most common control chart format is the Shewhart Chart where the difference between the expected and found values for the quality control samples is plotted against time.

The Shewhart Chart gives a general visual indication of when any systematic drift in the values returned by quality control samples is setting in. A method which is under control should show a random distribution of the actual values about the expected result, and any developing trends which suggest a bias should be investigated.

It is usual to mark the control chart with a warning limit at two standard deviations (2σ) and an action limit at three standard deviations (3σ). The standard deviation is generally derived from the performance data determined at method validation.

The laboratory should, wherever possible, have a documented policy for deciding when the chart indicates a condition under which the method should come under investigation, and this policy should be expressed quantitatively so that it will be applied consistently. In setting rules for a particular method, the following widely accepted practice should be borne in mind (a simple sketch of these checks follows the list).

o   Approximately 5% of results may be expected to fall outside the 2σ warning limits.

o   Any result outside the action limits requires investigation.

o   Two consecutive results outside the warning limits need an investigation.

o   A consistent run of successive results on the same side of the expected value should be investigated. It is widely accepted practice that a run of eight such successive results triggers an investigation, although many laboratories would feel the need to respond rather earlier than this.
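A minimal sketch of these checks, assuming the expected value and standard deviation come from the method validation data, might look as follows (Python); the thresholds simply restate the rules listed above.

def control_chart_flags(results, expected, sd, run_length=8):
    # Apply the listed rules to a sequence of quality control results and
    # return (index, reason) pairs for points needing investigation.
    flags = []
    warn, action = 2 * sd, 3 * sd
    run, last_side = 0, 0
    for i, x in enumerate(results):
        dev = x - expected
        if abs(dev) > action:
            flags.append((i, "outside the 3-sigma action limit"))
        if i > 0 and abs(dev) > warn and abs(results[i - 1] - expected) > warn:
            flags.append((i, "two consecutive results outside the 2-sigma warning limits"))
        side = 1 if dev > 0 else (-1 if dev < 0 else 0)
        run = run + 1 if side != 0 and side == last_side else (1 if side != 0 else 0)
        last_side = side
        if run >= run_length:
            flags.append((i, "run of %d results on one side of the expected value" % run_length))
    return flags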

Consideration should be given to updating the control limits at regular intervals. If the method is consistently delivering data inside the current warning limits, then this may indicate that the uncertainty being achieved in practice is improving relative to the data generated at validation. Conversely, if the method is slipping out of control too often, this suggests that the validation data is presenting an optimistic picture.

A common rule of thumb is to review the data every sixty points. If 1–6 (inclusive) points are found outside the 2σ limits, then the indication is that the limits are satisfactory. If more points are found outside, then the limits are optimistic and either they should be revised or the method investigated to bring it back to the level of performance required. If no points are found outside, then new 2σ limits should be set reflecting the enhanced performance. Some laboratories take the view in this last case that they will not reduce the limits since this would result in increased rejection of data. In these circumstances, the decision on whether to revise to tighter control limits will be determined by whether the un–revised limits are acceptable in the sense that they indicate that the method is fit for purpose.
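The sixty-point review described above can be expressed in the same terms; the sketch below (Python) simply counts how many of the most recent sixty results fall outside the current 2σ limits and returns the corresponding indication.

def review_limits(results, expected, sd, window=60):
    # Count recent results outside the current 2-sigma limits and suggest
    # the action described in the text.
    recent = results[-window:]
    outside = sum(1 for x in recent if abs(x - expected) > 2 * sd)
    if outside == 0:
        return "performance improved: consider setting new, tighter 2-sigma limits"
    if outside <= 6:
        return "limits satisfactory"
    return "limits optimistic: revise the limits or investigate the method"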

7.6.2       Frequency of Quality Control checks

There is no simple answer to how frequently quality control items should be run. The trite answer is as often as necessary.

A general rule of thumb is that they should be included at a minimum rate of one quality control item in twenty, but ideally one in ten. Experience of a method in a particular laboratory may indicate that more frequent checks are required.

In the case of methods which involve batch treatment of items, at least one quality control item should be present in each batch.

Some items can be tested or calibrated in duplicate as a check on the reproducibility of the method. It is far more useful, however, to expend the same effort in testing another quality control standard since this not only checks consistency but also gives information on overall error.

A method which can only be controlled by a high frequency of quality control checks should be looked at very carefully and seriously considered for replacement by a more stable method.

7.7  Documentation of Method

Irrespective of whether the method is in–house or standard, staff must have documentation which enables it to be applied properly and consistently. In the case of standard methods, this may be covered by providing staff with access to the standard specification. It will, however, normally be necessary to supplement this with instructions on the use of particular models of instrument and with information on local quality control regimes and the quality control data to be collected.

Another common situation where supplementary documentation is likely to be required is where a standard specification requires choices of procedure, based, for example, on sample or calibration item type. The laboratory must ensure that the correct option will be selected consistently irrespective of the person doing the selecting. It may, therefore, be necessary to provide guidance on how to make the choice in the supplementary documentation, since standard specifications are frequently less than explicit in this area.

In–house methods will need complete documentation; section 7.8 contains a suggested format. This can also be used as a checklist for determining whether published documentation is adequate. If it does not cover all the points noted in Section 7.8, then any omissions will need to be provided for in in–house–generated documentation.

Where the method is an in–house method based on a standard specification, there will need to be documentation specifying the variations from the standard and cross–referring to the specification.

All the documentation of methods must be issued as controlled documents. This is typically done by compiling a methods manual consisting of in–house methods documentation, any supplementary documentation for standard methods and a list of standard methods used by the laboratory. The methods manual should also contain information on where the standard specifications can be found in the laboratory. It will normally also need to refer to the appropriate instrument manuals and instructions.

7.8  Suggested format for in–house methods documentation.

Each page of the method must show the method number, the date of first issue and the date of the current version. Pages should be numbered in the format page…of…. The method number is of critical importance since it provides an unambiguous identifier for the method.

A suitable arrangement for the page header is:

Method No.: M/0001

Page 1 of 10

First issued February 1066

This issue: March 1993

The following sections should be included in the documentation except where the Quality Manager decides a section is inappropriate.

7.8.1.1  Title 

The title should be brief but must include a reference to the property to be measured or the calibration objective. 

7.8.1.2  Scope

This should clearly identify the range of items to which the test or calibration is applicable and any limitations on the range of any parameters which are measured, for example, suitable for measuring lead in wastewaters in the range …. to … ppm.

7.8.1.3  Principles of the Method

A brief description of the principles behind the measurement or calibration must be given; for example: a coloured complex is formed between the metal ions and dithizone, and the concentration is determined by comparing the absorbance of the solution at 259 nm with the absorbance produced by solutions of known concentration.

7.8.1.4  Sample requirements for test methods

The type of sample to which the test can be applied must be noted here. This section also contains instructions for any special sampling techniques, sample handling and preservation, sample preparation or pre–treatment required. Alternatively, it can refer to other documents in which these procedures are described.

7.8.1.5  Materials

Any materials or consumables used by the method must be specified, together with any required standards of purity or performance. Any quality checks on reagents must be described, or the method for carrying out the checks must be referenced. Avoid referring to specific suppliers or products in this section unless the source is critical to obtaining the correct quality of material; otherwise you run the risk of a non–conformance merely because your usual supplier has no stock and you used an alternative.

7.8.1.6  Equipment and Calibration

A brief description of the equipment must be given, with instructions on whether calibration is required before each use and how this calibration is to be carried out. Calibration instructions need not be explicitly included in the method, but a reference to where they can be found is then essential.

Routine calibration as described in the equipment log need not be covered here; only such calibration as is part of the method need be included. An instruction to check that calibration markings and labels are up to date is a wise precaution, however.

Reference standards, including any certified reference materials, required for calibration should be specified. It is also appropriate here to specify any quality control standards used and to indicate the basis of their calibration, for example by checks against certified reference materials.

7.8.1.7  Setting up and Checking

Instructions for setting up the equipment must be given here, followed by instructions for any checks required to confirm that the equipment is operating properly prior to use. The criteria for passing the tests must be given and instructions included on the information to be recorded about the tests.

It should be clear from this section what action is required when check criteria are not met. This need not be a detailed description of how to remedy problems but might refer to a manual or merely instruct the user to refer the problem to, for example, the Laboratory Manager.

7.8.1.8  Environmental Factors

Any environmental variables which should be considered or measured and recorded as part of the test or calibration must be noted. This would be relevant, for example, in the case of most calibrations and in materials testing where certain ambient temperature ranges may need to be adhered to for the test or, possibly, the temperature of the test may need to be recorded in the report.

7.8.1.9  Interferences

Any interferences, for example spectral, chemical, physical, etc. which might affect the results should be detailed with any precautions to be taken to minimize such effects.

7.8.1.10         Procedure

A detailed description of the procedure must be given, including any quality control measurements required, for example duplicate or reference measurements. The level of detail is difficult to specify for any test or calibration, but the assessors will have to be satisfied that the description defines the procedure adequately to enable it to be carried out in a consistent manner by different staff. You can, of course, assume that the staff have been trained. There is no compulsion to attempt to produce a description that could be followed by a raw recruit.

7.8.1.11         Recording Data

This section must give precise instructions on the data to be recorded from test/calibration items and for quality control. The format of any tables for results must be specified. Where worksheets are used, an example should be included with specimen data filled in.

7.8.1.12         Calculations

Full details of any calculations to be carried out must be included, with instructions on how the calculations are to be checked. Where calculations are performed by spreadsheet, there should still be a description of the calculations required and a clear identification of which sheet is to be used, for example by file name.

7.8.1.13         Quality Assessment

This section must precisely specify the criteria to be used to judge when results meet the necessary quality standards. This may include details of the correspondence required between duplicates or the values required to be returned for quality control. The objective, again, is to achieve consistency. There should be enough detail here to ensure that any person using the guidelines will come to the same conclusions. This normally means defined quantitative criteria or reference to rules for interpreting statistical quality control data.

Instructions on the response required to a failure in quality control must be given. This may simply be a requirement to rerun the test or calibration. Where this is not technically possible, it will normally be necessary for the Laboratory Manager to decide and, in most instances, to contact the client.
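
A minimal sketch of the kind of quantitative criteria this section might define is given below, in Python. The duplicate agreement limit and the ±2s/±3s control limits are hypothetical examples; a real method would state its own values and the actions to be taken.

    # Illustrative sketch only; the numerical criteria are hypothetical and
    # would be defined in the laboratory's own method documentation.
    def duplicates_acceptable(result_1, result_2, max_relative_diff=0.10):
        # Accept duplicates if they agree within 10 % of their mean.
        mean = (result_1 + result_2) / 2
        return abs(result_1 - result_2) <= max_relative_diff * mean

    def qc_status(qc_result, target, stdev):
        # Classify a QC result against +/-2s warning and +/-3s action limits.
        deviation = abs(qc_result - target)
        if deviation > 3 * stdev:
            return "FAIL - outside action limit, do not release data"
        if deviation > 2 * stdev:
            return "WARNING - outside warning limit, investigate"
        return "PASS"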

7.8.1.14         Performance Characteristics (uncertainty)

The known performance characteristics of the method should be given. These will generally have been determined when the method was first validated but, where values are subject to review as part of the quality system, it may be necessary to refer to another document, for example records held by the Laboratory Manager on current performance.

Either the uncertainty of measurement must be specified, or instructions provided on how it is to be calculated in any given instance.
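
Where the uncertainty has to be calculated for each instance, the calculation often amounts to combining the individual standard uncertainty components. The Python sketch below shows the common root-sum-of-squares approach with a coverage factor of 2; the component names and values are invented, and the sketch assumes uncorrelated components with sensitivity coefficients of one.

    import math

    # Illustrative sketch only: combining hypothetical standard uncertainty
    # components by root-sum-of-squares and expanding with a coverage factor.
    components = {
        "calibration of reference": 0.10,   # made-up values, expressed as
        "repeatability": 0.15,              # standard uncertainties in the
        "temperature effects": 0.05,        # same units as the result
    }

    combined = math.sqrt(sum(u ** 2 for u in components.values()))
    expanded = 2 * combined  # coverage factor k = 2, roughly 95 % confidence

    print(f"Combined standard uncertainty: {combined:.3f}")
    print(f"Expanded uncertainty (k=2): {expanded:.3f}")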

7.8.1.15         Reports

The data to be included in the formal report which will be sent to the client must be described. This section should include details of the units to be used and any qualifiers to be added to reports, for example uncertainty estimates. This section is not necessarily relevant to the person carrying out the test but is necessary for a complete specification of the test for audit purposes.

7.8.1.16         Safety

Any safety precautions to be taken and any hazards known to be associated with the method must be specified. An ISO 17025 assessment does not deal with safety, but the inclusion of safety information in methods is generally regarded as good practice.

7.8.1.17         Site Use

Where methods are carried out on site, any special precautions needed to ensure that data is valid must be noted. This should include checks on instruments or references to confirm that they have not suffered in transit. If site checks are not possible, then the equipment should be checked before leaving the laboratory and immediately upon its return.

7.8.1.18         References

Reference must be made to any standard specifications or published methods of relevance. Any manuals, technical documents or other relevant sources of information must be listed.

7.8.1.19         Authorization

The signatures, with dates, of the Laboratory Manager and Quality Manager, accepting the method for use, must appear. The Laboratory Manager is responsible for ensuring that the method is technically sound and that all relevant validation has been carried out and evaluated. The Quality Manager will normally carry out a check to ensure that all of this has been done and will, in addition, check that the level of documentation and its content complies with the requirements of the quality policy as expressed in the Quality Manual.

7.9  Authorization to deviate from documented procedure

It is inevitable that occasions will arise when the documented procedure cannot be followed exactly. This normally happens when samples or calibration items are untypical and technical adaptations must be made.

This is not a problem provided that the decision to deviate from the documented procedure is made by an appropriately qualified person and that the details are recorded. If relevant to the interpretation of the results, the deviation must also appear on the report. In practice, all such deviations are likely to be relevant and so need to appear on the report except in clear and exceptional circumstances.

The laboratory must document in the Quality Manual who is authorized to approve deviations from standard methodology. This should normally be at the level of professional staff or even the Laboratory Manager. The objective should be to ensure that the decision is not made by a junior staff member who may well not understand the full technical or service implications. The person authorizing the deviation should be made responsible for ensuring that the necessary records are made.

Consideration should also be given to whether a deviation needs to be discussed with the client as part of the contract review requirement – see Section 7.3. This will normally only be necessary where the deviation affects the utility of the data, for example when it calls into question whether the test method can still be regarded as complying with a standard specification.


8.  Equipment

8.1  Key Questions

·       Do you have a system for commissioning equipment and verifying its performance and calibration before it is used for test or calibration work?

·       Do you have a plan for periodic calibration and verification of the performance of all equipment which affects the validity of measurements?

·       Do you have records showing that this plan is followed, and which enable the status of any equipment to be verified at any point in its history of use?

·       Is equipment subject to regular checks or calibrations labelled so that its status can be seen immediately by users?

8.2  Equipment Records

ISO 17025 effectively requires a complete history of each piece of equipment. This should start with details of the checks and calibrations carried out before the equipment is placed in service and continue with a detailed record of all calibration, repairs, routine maintenance, and performance checks.

In this context, “equipment” should be understood to encompass any items which may affect the validity of measurements or calibrations, including reference standards of measurement, such as standard weights and reference thermometers.

The best way to keep these records is to institute an equipment log for each item which is, ideally, kept in the laboratory next to the appropriate equipment. In some cases, it may not be practical to keep the log next to the equipment, but it should be close by and readily accessible. Experience shows that maintenance actions are more likely to be recorded if the log is at hand. If staff have to look for it, they may put off making the record and perhaps forget altogether. In addition to the logs for each piece of equipment, the management should hold a master list of all the equipment.

There is no absolute requirement to issue a number to each piece of equipment, but this is strongly recommended, especially when several units of the same type are in use. Unique numbering of equipment by the laboratory avoids confusion. Serial numbers issued by manufacturers can be used, but these are often long and cumbersome and frequently not easily accessible.

8.3  Commissioning of new equipment

All new equipment must be checked for correct functioning before being placed in routine service. This should include checks against the manufacturer’s specifications and checks to confirm that the equipment gives satisfactory results when used to make the measurements for which it is intended. Where equipment needs calibration, this must also be done before it is put in service.

In this context, note that the fact that a piece of equipment is new does not mean it does not need calibration. Unless it is supplied with a certificate showing traceable calibration, it must be calibrated before it is used for the first time. Additionally, note that some pieces of equipment, for example balances, must be calibrated in situ, so even if these are shipped with a factory calibration certificate, calibration after installation and before use will be essential.

Where the equipment replaces or duplicates existing equipment, the checks should include a comparison of the results from each unit to establish the variations which might result.

The basic details of the equipment and a report on the commissioning checks should be recorded as part of the equipment log. Supporting evidence such as results and instrument output should be attached. The Laboratory Manager should also approve the checks and sign to accept the equipment into service.

Equipment undergoing trials must either be segregated or clearly labelled as not to be used so that there is no possibility of its being inadvertently used for routine work until it is formally accepted.

ISO 17025 paragraph 5.5.5 gives a list of information which must be on record for each piece of equipment. This information should appear as the initial page of the equipment log.

8.4  Service and Calibration Schedule

Before introducing any piece of equipment into service, the management should decide upon a service/maintenance, calibration, and performance checking schedule. This will normally be a combination of service from the supplier and in–house checks and calibrations. There is no need to have outside service or service contracts, but there would be a general onus on the laboratory to satisfy any assessors that the arrangements are adequate to ensure proper and reliable functioning of the equipment.

The proposed regime should be recorded in the equipment log and approved by the Quality Manager or other identified person as being technically acceptable and compatible with the quality policy. Having a definition of the proposed service/calibration schedule easily accessible in the log makes auditing easy since the proposed regime can be quickly checked against the actual records, which are also in the log.

Servicing and preventive maintenance should be as recommended by the manufacturer, who may also be able to carry out calibration checks and adjustments. Section 9.5.2 below deals with strategies for deciding on calibration intervals. Whichever strategy is used, the approach should be conservative to pick up any calibration problems before they affect data quality.

In-house checks of equipment should be scheduled to cover the gaps between any service visits and calibrations. Some equipment is effectively checked each time it is used by means of reference samples which are run as part of quality control. Maintenance and calibration are then carried out on an as-needed basis when these checks show a performance deterioration. This is a perfectly legitimate strategy provided that the results from the reference samples are recorded either in the equipment log or along with the analytical data.

Do not, however, lose sight of the fact that such checks can conceal underlying deterioration in performance. For example, a colorimeter may give perfectly acceptable results even if its wavelength calibration has shifted, provided it is calibrated with the standards at each use. However, if you are no longer taking the reading at the absorbance maximum, your detection limit will certainly be degraded, and precision will also suffer in most instances. Gas chromatography detectors can be coerced into performing by turning up the amplification, but this does not alter the fact that, as the detector becomes dirty, detection limit, signal-to-noise and dynamic range performance will all degrade.

For this kind of reason, most equipment will require some formal checks even if it is effectively checked with the standards at each use.

8.5  Responsible persons

It is a good idea, especially in larger laboratories, for the management to appoint an individual to be responsible for each piece or class of equipment and this person should have a deputy. The responsible person will have a watching brief over the equipment and will be responsible for ensuring that the necessary maintenance, calibrations, and checks are carried out and recorded.

8.6  Routine operation of the equipment log

Every action taken – supplier’s service, in–house trouble shooting, routine checks, etc. – must be recorded in this equipment record. Any supporting documentation, such as service reports, calibration certificates and output from performance checks, should be attached to the record. This document should become a complete history of the equipment so that its state of calibration and performance at any point in time can be demonstrated.

8.7  Other components of the equipment log

In addition to the records, the equipment log can usefully contain, or be kept with, a copy of operating procedures for the equipment, including the manufacturer’s manual. If it is not practicable to keep this information as part of the equipment log, then the log should give the location of the operating instructions and manuals.

In cases where equipment operation is described adequately in the methods documentation, there is no need to repeat this information in the equipment log.

8.8  Smaller items of equipment

All equipment which affects the validity of measurements will have to be recorded, but for smaller items it is not essential to have a full equipment log. Examples would be such things as thermometers, volumetric glassware, timers and even balances. In these instances, a composite log covering, for example, all the laboratory’s thermometers, would be appropriate.

8.9  Equipment labelling and sealing.

Each piece of equipment which is subject to regular checks or calibration should carry a conspicuous label which shows the date when it was last checked or calibrated and the date when next due. This should be signed by the Laboratory Manager or the person responsible for the calibration.

The staff should be instructed, via the Quality Manual, that they must not use any equipment where the label shows that it is overdue for a check or calibration.

Where there are limitations on the calibration of equipment, for example, if it is not calibrated over its full range, there should also be a label indicating the limitations.

In some laboratories there may be equipment which is only used for indication purposes and so is not rigorously calibrated. Such equipment should carry a label showing that it is not calibrated and hence not to be used for measurements where traceability is required. Examples might be rough balances or timers used in undemanding applications.

ISO 17025 is subject to some differing interpretations on the labelling of uncalibrated equipment. Some schools of thought regard it as fatuous to label an item as uncalibrated. In practice, however, the onus will be on the laboratory to satisfy assessors that there is no danger of confusion leading to the use of uncalibrated equipment when calibrated equipment is required. The simplest way to achieve this is to label all indicator equipment as such.

Equipment which is effectively calibrated at each use should carry a label to this effect with a reference to the calibration instructions, which might be within a method description. An example would be a pH meter, which might be labelled “CALIBRATE AT EACH USE, Ref Method CA/001.”

Some equipment is difficult to label in the conventional sense. Volumetric flasks and thermometers can be a particular problem. There is still a requirement to mark the calibration status, but this could be by means of a colour code or other marking. For example, all calibrated thermometers could have a piece of distinctively coloured tape wound round their stems and the laboratory could have a notice saying, “CALIBRATED THERMOMETERS ARE CODED BLUE – Expiry Date 31st May 2007.”

Where equipment which is calibrated or verified at regular intervals has adjustment screws which should only be adjusted as part of the calibration procedure and are not needed in normal use, these should be sealed in some way, for example by signed labels, to prevent tampering, or at least to ensure that any tampering does not go unrecognized.

8.10                   Equipment in use before formal records are implemented.

In practice, most laboratories will have an inventory of existing equipment before implementing equipment logs. Reasonable effort should be made to retrieve information to set up the logs, but assessors will recognize that not every piece of information will be available. For example, a reasonable estimate of the date when the equipment was received will be acceptable, and there is no need for extensive research to establish a precise date.

Commissioning test information will similarly be typically absent, and the relevant part of the log should simply be endorsed: “EQUIPMENT IN USE AT IMPLEMENTATION OF RECORDS,” or some similar wording. Ongoing checks and calibration will establish the equipment’s integrity from now on.

If you do have any historical information on the equipment, however, for example copies of service reports, calibration history, commissioning reports, then include these in the equipment log. 


9.    Traceability of Measurement

9.1  Key Questions

·       Have you identified all the measuring equipment which is involved, directly or indirectly, in measurement or calibration and which, if not properly calibrated, would affect the validity of measurements?

·       Is this equipment calibrated in a manner which provides traceability to the international measurement system?

·       Do you have a management procedure to ensure that the calibration is always maintained, i.e., recalibration is conducted as necessary and, where possible, equipment is monitored so that any drift away from calibration will be detected?

·       Do you have records which could be audited to confirm the calibration status of the equipment at any point in the past?

9.2  Meaning of Traceability

Basically, ISO 17025 requires that a laboratory has in place a calibration system which ensures that, within known limits of uncertainty of measurement, any tests or calibrations which it makes are comparable with those of any other laboratory. The key element in achieving this is to ensure that all equipment, in all laboratories, which has an impact on the validity of calibrations or tests is calibrated in such a manner that there is an unbroken chain of comparisons which lead from the equipment to a recognized international standard of measurement. Wherever possible, this international standard is required to be the corresponding SI unit of measurement.

The ideal way in which the system works in practice is that a country establishes a national metrology system where a central metrology laboratory holds the national standards for all measurements. This central laboratory establishes a link into the international measurement system by, from time to time, checking its standards against those of other countries and participating in interlaboratory measurement audit exercises. In the latter case, the laboratories circulate references, for example a mass or a thermometer, and all compare them with their own standards, so establishing a basis of agreement, or otherwise, between the national calibration laboratories.

Laboratories and industry requiring calibration can then go to their own national metrology laboratory to have their equipment calibrated in the knowledge that the calibration is internationally traceable.

Such national metrology systems do not exist in all countries, and in these cases it will be necessary for laboratories seeking ISO 17025 compliance to establish traceability by having calibrations performed by agencies outside the country which are able to provide the necessary traceability. This could, for example, be a national metrology laboratory in a nearby country.

Some equipment can be sent to the calibration laboratory for calibration and then shipped back to the laboratory, but many systems are either too bulky for this approach or they need calibration on site, for example balances, as calibration is invalidated by their being moved. This inevitably means bringing calibration personnel and references to the laboratory site.

This extra–national approach to traceable calibration is both inconvenient and expensive so it is in the interest of countries seeking to establish a network of ISO 17025 compliant testing and calibration laboratories to seriously consider establishing a national metrology system.

9.3  Acceptability of calibrations to accreditation bodies

It is obviously of crucial importance to laboratories seeking ISO 17025 accreditation that they derive their calibration in a manner which will be acceptable to the accreditation body. The mere existence of a national metrology system claiming international traceability will not necessarily guarantee such acceptance.

For an accreditation body to be satisfied with a calibration, they will need to know that the calibration service complies with all the key requirements of ISO 17025. Ideally, the calibration service, even if it is a national metrology laboratory, should be accredited to ISO 17025 by an accreditation body acceptable to the body being engaged to assess laboratories which depend on the calibrations.

If the calibration service is not accredited, then the accreditation body will normally need to investigate it to satisfy itself that the calibrations are adequate, and that traceability is intact.

Unfortunately, different national laboratory accreditation bodies have widely divergent views on the calibrations which they will accept. Broadly speaking, it is more likely that calibrations will be accepted if obtained directly from a national metrology laboratory than if obtained from a commercial calibration service drawing its calibration from the same national metrology laboratory.

Around the world, some national metrology services have established reputations, but even this does not guarantee acceptance by any specific accreditation body unless there is a formal mutual recognition agreement. However, calibrations from the national services in the Republic of Singapore, the Republic of Korea, the Republic of India, the Kingdom of Thailand, most EU countries, Japan, Australia, New Zealand, the Republic of South Africa, and NIST in the United States of America are generally well regarded at the time of writing.

The key issues on which the accreditation body will need to be satisfied for any calibration are as follows:

a.     That the references used are properly calibrated and provide international traceability.

b.     That the calibration procedures being used are scientifically sound, of known performance characteristics, for example uncertainty of measurement, and subject to proper quality control.

c.      That staff carrying out the procedure are properly trained and competent in the calibrations performed.

It is strongly recommended that any laboratory intending to use a particular calibration service, even a national metrology laboratory, enter a dialogue with the body chosen as a potential accreditor for the laboratory along the following lines:

a.     Determine whether the calibration service has ISO 17025 accreditation and who its accreditation body is. Ask the proposed accreditation body for your laboratory whether they have mutual recognition for calibration with the accreditation body of the calibration service.

b.     If the calibration service is not accredited, then ask the proposed accreditation body for your laboratory whether they have any policy on the acceptance of calibrations from the proposed calibration service.

c.      If the issue is still unresolved, ask the proposed accreditation body for your laboratory what information they would require to decide on the acceptability, or otherwise, of calibrations from your proposed calibration service. What they will normally ask for initially is examples of calibration certificates; information on how the calibration service establishes its traceability; what arrangements the calibration service has for measurement audit or inter-comparisons with other calibration bodies; and whether it has a pro-active and auditable quality system.

Essentially, the rule is to establish, at as early a stage as possible, that your proposed accreditation body will be likely to accept the calibrations you are proposing to rely upon. There is no point in spending time, money, and effort on setting up calibrations which are simply rejected at assessment.

9.4  Measurements not traceable to SI Units

9.4.1       Industry and Consensus Standards

There are many areas of testing where measurements are not strictly traceable to an SI unit. Many types of product testing fall into this category and rely not on a fundamental unit but on a recognized industry standard. For example, colour fastness in textiles is measured relative to a specific standard blue cloth. There is no SI unit for colour fastness. Similarly, measurements such as abrasion and pilling resistance rely on comparison with standard samples or photographs of samples. The basis of the comparison is universally accepted but it is not a fundamental unit as such.

ISO 17025 allows for this in testing and in calibration through paragraphs 5.6.2.2.2 and 5.6.2.1.2 respectively. Here the idea of traceability to a certified reference material, a reference method or a consensus standard is accepted where relevant.

A difficulty which sometimes arises in this area is that some accreditation bodies persistently refuse to apply these clauses of the standard and insist that they will only offer accreditation for calibrations and measurements which are traceable to SI units. Strictly speaking, this is a practice discouraged by ISO 17025, which frowns upon extensions to the standard, but nonetheless it is a present problem for laboratories seeking accreditation. If you have tests which fall into this category, then it is advisable to determine the attitude of your possible accreditation bodies before becoming committed to one as your accreditor.

9.4.2       Reference Materials

In addition to industry standards there are increasing numbers of what are typically referred to as certified reference materials which are used to establish accuracy in measurements or calibrations. Some discussion of the use of certified references has already taken place in Section 7.5.1. Such materials have a relevance, however, to traceability of measurements since they effectively represent a sample for which the correct answer is universally accepted. In this case, measurements are traced back to the certified reference rather than an SI unit but provide the same basis in that they establish comparability between different laboratories.

This is particularly common practice in chemical analysis where, strictly speaking, the fundamental SI unit is the mole. However, realization of the mole for every analytical target is hardly a practical proposition so the certified reference material serves as a suitable alternative.

However, it is important to understand that under ISO 17025 the use of certified reference materials to show that measurements are acceptably accurate is not a substitute for traceable calibration of instrumentation.

9.4.3       Understanding the hierarchy of reference materials

A reference material is any material or substance where one or more property values are sufficiently homogenous and well accepted that they can be used for checking methods or apparatus. Two key types of reference materials are (a) single compounds or items of established purity or properties and (b) matrix references which are specific types of samples where accepted values of one or more determinands have been established.

The key to reference materials is in the acceptance. The highest level of acceptance is a certified reference material (CRM) but even this term is somewhat variable in meaning. Strictly speaking, the only certified reference material of impeccable pedigree is one complying with the definition of a CRM in ISO Guide 30, which means one produced according to ISO Guide 35 by an organization complying with ISO Guide 34 and where the certificate complies with ISO Guide 31. The problem at present is that there is no accreditation system against these ISO Guides, which leaves the purchaser of reference materials with the responsibility of assessing compliance directly.

What happens in practice is that suppliers of commercial reference materials make an evaluation, and purchasers rely on the credibility of the supplier and on the content of the certificates provided to give confidence that the values quoted for the reference materials are reliable. Whether the reference material comes with enough information to enable it to be classed as a certified reference material (CRM) or only as a reference material (RM) is a matter of interpretation. The basis of any supplier’s interpretation of the terms can normally be found in their catalogue. Note that in the US the terms NIST Reference Material or Standard Reference Material (SRM) are generally regarded as equivalent to CRM.

In practice, a reference material obtained from a reliable organization, such as NIST, the EU Community Bureau of Reference or the United Kingdom Laboratory of the Government Chemist (LGC), will be very widely recognized and can be regarded as a reliable basis for checking the accuracy of methods.

9.5  Some other calibration issues.

9.5.1       In–house Calibration

It is acceptable for testing laboratories to be self–sufficient in calibration and not to use any external calibration services for their equipment. They will still, of course, need to interact with the international metrology system to achieve traceability. This will usually mean holding traceable calibrated references to be used as the basis of the in–house calibration system.

For example, a testing laboratory might well hold a master reference thermometer which has a calibration certificate issued by an accredited calibration laboratory. The laboratory could then use this to calibrate all its working thermometers according to a documented procedure. The body assessing the laboratory as a testing facility would include in its assessment an evaluation of the internal calibration system for thermometers and would need to be satisfied on the following points:

a.     That the reference thermometers were acceptably and traceably calibrated by a body recognized by the assessment body as providing adequate calibration.

b.     That a sound and documented procedure was in place for carrying out calibration of working thermometers and that adequate quality controls were applied.

c.      That staff carrying out the internal calibrations were properly trained and seen to be competent.

d.     That there were auditable records showing that all these criteria were being met on a routine basis.

Note, too, that the internal calibration would also have to be subject to an evaluation of its uncertainty by the laboratory just as though it were carried out by an external accredited calibration service.
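
As a simple illustration of such an in-house calibration, the Python sketch below derives corrections for a working thermometer from a comparison against the traceably calibrated reference at a few temperatures. The readings are hypothetical; a real procedure would also state acceptance limits and record the associated uncertainty.

    # Illustrative sketch only: corrections for a working thermometer derived
    # by comparison with a traceably calibrated reference (hypothetical data).
    comparison_points = [
        # (reference reading degC, working thermometer reading degC)
        (0.00, 0.15),
        (25.00, 25.20),
        (50.00, 50.10),
    ]

    for reference, working in comparison_points:
        correction = reference - working  # add this to the working reading
        print(f"At {reference:5.2f} degC: working reads {working:5.2f}, "
              f"correction {correction:+.2f} degC")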

In practice, laboratories typically carry out some calibrations in–house; thermometers, spectrometers, and simple equipment, such as pH meters and conductivity meters, are examples. However, more complex calibrations, and especially those requiring expensive references which are themselves expensive to recalibrate, are typically carried out by outside services. For example, balance calibration is quite complex, needs only to be done fully once a year and needs expensive weights which have themselves to be calibrated annually. It is, therefore, more economical in most situations to use a commercial calibration service.

Another point to consider is the question of adjustment of equipment found to be out of calibration. This can be a skilled task and may well be outside the capability of the laboratory. Modern balances are a case in point. In these circumstances, the use of a service which calibrates and can, if necessary, service and adjust is likely to be an attractive option.

9.5.2       Intervals between calibrations

Each laboratory must have a policy for determining calibration intervals, the object being to ensure that re–calibration takes place before the previous calibration has deteriorated to the point where the validity of the measurement is called into question.

To some extent, routine quality control checks will provide information which could reveal instruments drifting out of calibration. However, wherever practicable, the laboratory should institute a regime for verification of the calibration of equipment between formal calibrations to detect unexpected drift or malfunction. This is particularly important for key items such as balances, where a loss of calibration would have far-reaching consequences.

Where it is not possible to carry out any simple routine verification to confirm that calibration is maintained, the laboratory will have to rely on setting specific calibration intervals. In the first instance, the laboratory sets an initial calibration interval based on the manufacturer’s recommendations, the heaviness of use of the instrument, the accuracy required, the perceived risk of a loss of calibration and the magnitude of the impact, and local experience of similar instruments. The calibration is checked at the end of this interval and, if it is still correct, the interval is confirmed as adequate. Alternatively, the interval is reduced by 50% if the check shows that re–calibration is required.

This process is continued until an appropriate and adequate interval is arrived at. Records must be kept so that, if necessary, the laboratory can justify the interval chosen.

The process may, conversely, provide evidence of the stability of equipment and hence a justification for reducing the frequency of calibration relative to the original estimate.
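
The interval-setting logic described above can be summarized in a few lines. The Python sketch below is illustrative only: the confirmation and the 50% reduction follow the strategy outlined, while the one-month floor is an assumption added for the example.

    def next_calibration_interval(current_interval_months, calibration_still_valid):
        # Confirm the interval if the end-of-interval check passes; halve it if it fails.
        if calibration_still_valid:
            return current_interval_months
        return max(1, current_interval_months // 2)  # assumed floor of one month

    interval = 12  # initial estimate, e.g. from the manufacturer's recommendation
    interval = next_calibration_interval(interval, calibration_still_valid=False)
    print(interval)  # 6 - recalibration is now scheduled twice as often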

Whatever the strategy for deciding calibration intervals, it must be consistent and documented. It should also be borne in mind that it is not unusual for accreditation bodies to insist on minimum calibration intervals for some equipment, for example balances, which are typically calibrated annually. In some cases, the accreditation body is prepared to consider relaxations to longer calibration intervals, but the laboratory would have to provide convincing evidence that data validity is not being compromised by extending the interval.

It should also be borne in mind that some test methods, for example in construction materials testing, include requirements for verification of equipment calibration at specific intervals. These intervals must be complied with, or the test method will be invalid.

9.5.3       Relaxation of calibration requirements

Some relaxation on the traceability of calibrations is apparently permitted in ISO 17025 by paragraph 5.6.2.2.1, but only for testing laboratories. If it can be established that calibration uncertainty “contributes little to the total uncertainty of the test result,” then traceable calibration is not required and other methods of demonstrating confidence in the equipment may be adopted.

In practice, this is not really a dispensation at all since there is still a requirement to show that the equipment provides the necessary uncertainty of measurement, and it is difficult to see how this can be done without at least some calibration.

In any case, accreditation bodies tend to take a narrow view of the traceability of calibrations, and it is not recommended that this clause be relied upon as an easy excuse to avoid traceable calibration.


10. Administration of work and sample tracking

10.1                   Key Questions

o   Do you have a procedure for logging samples into your laboratory?

o   Are all samples uniquely numbered as soon as practicable after receipt?

o   Does your system ensure that samples are stored securely and in a way that will preserve them against changes which may affect data generated from them?

o   Do you have a system for ensuring that the procedures required to be carried out on samples are clearly communicated to laboratory staff?

o   Do you have a procedure to ensure reports are checked to make sure they correspond to the raw data?

10.2                   Receipt of samples or calibration items

Before any work is begun, the contract review process described in Section 7.3 must be complete. There must also be a check to confirm that any test or calibration items are appropriate for the procedure and in suitable condition.

The documentation on sample/item receipt should specify which staff are authorized to receive and record items. The usual practice is to keep a receipt register. The information recorded should include details of the condition of items on receipt and should identify the person making the register entry.

The “condition” should cover any parameters which may affect the results, for example temperature, whether bottles are full or partially full, and any information on breakages or damage, spillages, leaks, or lost labels.

The person receiving the items should also be responsible for examining them to ensure that they are suitable for the intended test or calibration. If there are any problems, action must be taken to ensure that no work is done before the problems are resolved with the client. A record must be kept of any communications with the client, as described in section 7.3, since such communications involve amendment to the contract review.

It should also be emphasized in the quality documentation that no work must be done until all matters of concern have been brought to the client’s attention and clarified to mutual satisfaction, i.e., the contract review is complete. These may include problems with the items themselves or any lack of clarity about the work required.

10.3                   Identification and Storage of Items

The laboratory should have a clearly documented policy on where items are to be stored. This may involve several storage locations for different types of items, but these should be clearly specified. Each storage location should have a book or record form in which items can be signed out and back in again.

A particular item will either be found in its storage location or can be located by reference to the book. The record in the book should identify the person taking the item and the date and time of removal. Similarly, the return date and time should be recorded, if appropriate. The object is to create a complete history of the custody of the item.

All items must have a unique identifier which stays with them throughout their time in the laboratory. Although the standard does not actually require sample numbers, it is generally difficult to persuade assessors that, for example, using the client’s sample description as an identifier provides unique identification. Bear in mind that the “uniqueness” must be retained over the period for which the laboratory retains its records, so a system over which the laboratory does not have direct control is immediately suspect as possibly leading to duplication.

For the same reason, it is not adequate to use a numbering system which repeats cyclically. It is not uncommon for laboratories to reset sample numbers to unity at the start of a calendar year since they only expect to keep samples for a very short period. The records, however, will have much longer currency, which must be allowed for in the numbering system.

The numbering method can be chosen to suit the requirements of the laboratory, but it must be unambiguous. The laboratory sample number must be related to any client identification details. The simplest way to do this is to record the laboratory number and the client’s identification in the sample receipt records.
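
One possible way of meeting these requirements is sketched below in Python: a counter that never resets issues the laboratory number, and the pairing with the client’s identification is recorded at receipt. The file name, numbering format and field names are hypothetical.

    import json
    from pathlib import Path

    # Illustrative sketch only: laboratory sample numbers issued from a counter
    # that never resets, paired with the client's own identification.
    COUNTER_FILE = Path("sample_counter.json")

    def next_sample_number(client_reference):
        state = json.loads(COUNTER_FILE.read_text()) if COUNTER_FILE.exists() else {"last": 0}
        state["last"] += 1
        COUNTER_FILE.write_text(json.dumps(state))
        lab_number = f"S{state['last']:06d}"  # e.g. S000123 - unique for the life of the records
        # In practice this pairing would be entered in the sample receipt register.
        return {"laboratory_number": lab_number, "client_reference": client_reference}

    print(next_sample_number("Client batch 42, drum 3"))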

Both the laboratory number and the client’s identification details will have to be included in any report sent out – see section 13.

10.4                   Internal information transfer

There must be a clearly documented procedure which shows exactly how the client’s requirements are passed to the person who will do the test or calibration work. This should normally be done in writing so that a traceable record is kept. This requirement is met, for example, by the person allocating work completing a worksheet and passing it to the laboratory staff.

Once the work is complete, there must be a documented procedure showing how the data is checked and transcribed to the final report. It should be clear who is responsible for deciding whether the quality control criteria have been met and who can release the data.

There is no problem with having checks performed at different levels in the management/technical structure, but the system must be clear, and the quality control information must be recorded explicitly. A system where individual staff are given the initial responsibility of verifying that the data meets the quality control criteria is perfectly acceptable, provided the criteria are documented and consistently available to all who carry out the checks. This may be achieved, for example, by including details of the acceptance criteria in the methods documentation. However, the responsibility for the final release of data to clients must reside with clearly identified individuals.

10.5                   Compilation of reports

The procedure for compiling reports must be clearly documented, showing exactly which staff are responsible for putting reports together and authorizing their release. There must be a documented requirement that all reports are checked against the raw data and the client’s instructions before being issued. This should be done by the person authorizing release of the report since they take responsibility for its contents. All checks should be recorded. Normally, this is achieved by the person carrying out the check signing and dating the documents involved.

10.6                   Retention and Disposal of Items

This is only an issue for testing laboratories since items sent for calibration are invariably returned to the client. In this case, the laboratory’s obligation is to ensure that items are properly packed and transported to maintain the integrity of the calibration.

Testing laboratories should have a clear policy on how long samples are kept. Wherever possible, samples should be retained for a period after the report is issued in case there are any queries which might be resolved by re-test. For some determinands, of course, retention is impractical, so the laboratory should reserve the right to dispose of samples immediately if their retention makes no technical sense.

ISO 17025 specifies no minimum period for sample retention, but one month after report issue is widely used where it is technically meaningful to re-test samples. Clients should be made aware, for example through the laboratory’s standard terms of business, what the sample retention policy is so that there are no misunderstandings.

It should be clearly documented who may authorize sample disposal and all disposals should be recorded.


11. Recording of results and associated data

11.1                   Key questions

o   Does your system ensure that all observations are recorded at the time when they are made?

o   In this record, is the raw data always preserved so that any problems can be investigated?

o   Do you have a policy on how amendments to entries on laboratory records are to be made?

o   Does the system allow the person making any record to be identified?

11.2                   General principles of data recording

No set format for the recording of results is specified in the standard. Laboratories may adopt whichever system suits their needs. The two usual options are the use of worksheets or the use of laboratory notebooks, either personal or method specific. In some instances, both systems can operate together.

Laboratories are also increasingly entering data directly into computer systems. The control of such systems is dealt with explicitly in Section 12.

The overall requirements from a quality point of view are that data must be recorded at the time of observation and in such a way that there is a complete audit trail, so that errors can be traced, and work can be repeated in a manner as near to the original as possible. It must be possible to trace a result to the person who made the measurement and the equipment used and to identify precisely the method used. This means that the audit loop can be closed, making it possible to check that the work was done by a trained member of staff, using appropriate methods, on correctly functioning and calibrated equipment.

The record must be complete, so there should be no use of scraps of paper. All data, calculations and observations must go on worksheets or in notebooks in non-water-soluble ink or ball point pen. Pencils and water-soluble felt tip pens should be banned from the laboratory.

Corrections to notebooks and worksheets must be made in such a way that the original version can be read. The approved method is to cross out the original with a single line and write in the corrected version as near to it as possible. Corrections should be initialled and should, ideally, carry a note explaining the reason for the correction. Correction fluids should be banned from the laboratory completely.

11.3                   Worksheets

Worksheets are the best strategy for the routine laboratory. There should be a worksheet for each test or group of related tests. The worksheet should not only provide space for recording results but should also require all calculations to be done on it. In fact, wherever possible, the calculations should be laid out in outline ready for the variables and the results to be written in.

This should include all calculations, even those required to calculate dilution of standards and weights derived by difference. The object is to have as much information as possible to support quality assurance and to provide for error tracking. A well-designed worksheet should also reduce the likelihood of errors.

All entries must be initialled, and all calculations should, wherever possible, be checked and initialled by the checker. The checking of all calculations is sometimes resisted by laboratories on the grounds that it duplicates work. ISO 17025 is specific in requiring calculations to be checked although it does not actually say that this must be by a second person. Whatever policy is adopted, the onus of proof will be on the laboratory to demonstrate that there is not a substantial error rate in calculations. One possible strategy is to check a sample percentage of calculations and, hopefully, as a result to build up a body of evidence that errors are not being made. However, not all assessors will find this acceptable.

Finally, the worksheets should have space for signature by whoever is responsible for final quality control checks and a space to indicate pass or fail. This effectively releases data for inclusion in reports.

11.4                   Notebooks

If notebooks are used, they must be properly controlled since they constitute the raw data record belonging to the laboratory.

Each book should be numbered. The Laboratory Manager should keep a record of the holder or user of each book. The books should have numbered pages so that a record can be referenced by notebook number and page and so that pages cannot be torn out without being detected.

Staff should not be allowed to remove notebooks from the laboratory except for field work, and full books should be returned to management for archiving before a new one is issued.

If the notebook system is operated in conjunction with worksheets, then the worksheets should have a section for recording cross references to notebooks: book number and page number.

Notebooks in isolation are not the ideal solution for use in a routine testing laboratory but can be a useful adjunct to worksheets.

One problem which can arise with notebooks and, for that matter, with worksheets is that staff, anxious to be neat and tidy, record data in rough and then copy it over later. Strictly speaking, this practice is not a non-conformance as such, provided the rough notes, which now constitute the raw data, are retained as part of the laboratory record. It would also be necessary to have checks on the accuracy of the transfer of the data from the rough notes.

It is preferable, however, for a laboratory to forbid this type of practice and to insist upon all data being recorded directly on worksheets or in official notebooks. This ensures consistency in practice and leaves no ambiguity about what constitutes the raw data. It also eliminates a data transfer step, which is a potential source of error.

11.5                   Other types of data

Many instruments now produce printed output. This should always be retained as part of the raw data archive. Ideally, it should be attached to worksheets but if this is impracticable it should be filed in a systematic manner so that it can be readily retrieved.

The printout must show the sample number and the operator of the instrument. Ideally, the operator should initial it to prevent misrepresentation.

Any other relevant paperwork produced should be traceable in the same way and must be marked with the sample number and the name of the person generating it.

Where instruments record data in computer files, these should, preferably, have provision for recording the operator and the sample number to which the file refers. If this is not accommodated, it will be necessary to institute a record book which is filled in with the sample number, the operator, and the computer file identifier. Alternatively, the same information might be accommodated on worksheets or in notebooks.

All computer files will need to be secured by appropriate backup regimes. See section 12.9 for details.


12.  Computer Systems

12.1                   Key Questions

o   Do you have control of what software may be loaded onto any of your computers?

o   Are all computer systems, including software, checked to ensure that they record and process data correctly?

o   Is all software secured against unauthorized changes?

o   Do you have records showing when software was updated and that checks were made on its correct functioning after update?

o   Do you have a procedure for regular backup of data held on computers?

12.2                   General Issues

Computers and associated software can obviously affect data validity and so need to be properly managed and controlled, whether they be part of instrumentation or systems simply used to store and process data.

All computers must be introduced into use through a controlled system and must be subject to checks for correct functioning before being placed in routine use. This applies to all hardware and software and especially to software written in–house or applications developed on spreadsheets, for example. There should be records of the checks used to ensure correct functioning, as with any other piece of equipment.

12.3                   Control of Software

There should be a defined person who is responsible for authorizing any software to be used in the laboratory. This person must ensure that it is checked to show that it does not corrupt data or other information before it is released for use. This requirement applies not only to new software but also to any updates or modifications. The responsible person should be the Laboratory Manager or someone to whom the responsibility is delegated by the Laboratory Manager.

There is no reason why staff cannot set up spreadsheets, for example to carry out routine calculations and data processing, but they must have these checked and authorized before use. It is not acceptable to have staff setting up ad hoc applications and using them without their being accepted by the Laboratory management as suitable for their purpose. One issue which must be addressed, however, is the question of whether spreadsheets and similar software which can contain set–up calculations may be corrupted and so lead to wrong results. Wherever possible, spreadsheets must be protected from alterations by using passwords reserved to the management. Where this is not possible, a set of sample data must be available, which can be loaded before the spreadsheet is used, to check that the calculated values are determined correctly.
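
A minimal sketch of the second safeguard, checking a spreadsheet against a set of sample data with known expected results before release, is given below in Python using the openpyxl package. The file name, sheet name, cell references and tolerance are hypothetical, and note that openpyxl with data_only=True returns the values last calculated and saved by the spreadsheet application rather than re-evaluating the formulas itself.

    from openpyxl import load_workbook

    # Illustrative sketch only: verifying a calculation spreadsheet against a
    # sample data set with known expected results (hypothetical names and cells).
    EXPECTED = {"B10": 12.50, "B11": 0.125}   # expected results for the sample data set
    TOLERANCE = 1e-6

    wb = load_workbook("lead_in_water_check.xlsx", data_only=True)
    ws = wb["Calculations"]

    for cell, expected in EXPECTED.items():
        actual = ws[cell].value
        ok = isinstance(actual, (int, float)) and abs(actual - expected) <= TOLERANCE
        print(f"{cell}: expected {expected}, found {actual} -> {'OK' if ok else 'CHECK FAILED'}")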

To ensure that the system for controlling software is effective and can be audited, each computer should have a log which shows the hardware and software installed. Any new software or modifications to existing software, including new releases of commercial packages, should be recorded in the log with the date when they came into use. It is then possible to determine which version of the software was in use at any time, should an error need to be traced. There should be regular audits of the actual software installed against the log and any unauthorized software should be removed.

It must also be possible to recreate the previous versions of any software in case an error or query arises, and it is necessary to determine whether the software was responsible. The simplest way to provide this backtracking facility is to ensure that, at each update, a copy of the previous version is retained on a removable medium such as tape or disc. Some larger commercial packages have a built–in facility to backtrack and this can be used, if available.

12.4                   Computer Networks

Increasingly, laboratories use local area computer networks, especially if they are operating Laboratory Information Management Systems (LIMS). Such networks can make control of software easier since work areas can be established with restricted access and often with different levels of access. These networks should be exploited to control, for example, who can write new files to the network server. In general, the network should be operated as described above for stand-alone computers: there should be a log of the software installed, and updates and new installations should be controlled as described.

Care will need to be taken where the workstation machines also have local disc drives. The laboratory will need to have a policy on whether local software will be permitted. If local software is in use, then each workstation will need its own log in addition to the network log. If local software is not authorized, the audits should check whether any unauthorized material has been loaded.

12.5                   Computer systems managed by other departments.

It often happens that the laboratory management does not have direct control of the network servers. These may be part of a general company network, for example, and so under the control of a computer section. The laboratory management will need to demonstrate to assessors that it knows what is being done by the network management on its behalf and that it is informed of any software updates so that it can carry out checks on correct functioning.

The most convincing way for the laboratory to demonstrate its control is to have a written agreement between the laboratory and the network management which specifies the division of responsibilities. This should cover at least the following points:

a.     The network management must agree to consult the laboratory management and obtain its agreement to make new software accessible to laboratory staff. If necessary, the laboratory management must reserve the right to carry out checks on the software before accepting it.

b.     The network management must agree to inform the laboratory if it intends to update or otherwise amend any software used by the laboratory. It should be clear what checks the network management will carry out on the laboratory’s behalf and what checks will be carried out by the laboratory before the software revision is released. The key is to clearly define the responsibilities of each party to avoid inadequate checks when both parties suppose the other to be responsible.

c.      The responsibility for providing the ability to backtrack to the previous version of the software should be clearly agreed.

d.     The responsibility for keeping the log of software and updates should be agreed.

e.     There should be a clear mode and level of communication between the two parties. In particular, the network management must know who is authorized to request software changes on behalf of the laboratory and must not respond to requests from unauthorized members of the laboratory staff.

f.      The arrangement for backing up the laboratory’s data on the network must be agreed. See section 12.9 for general guidance on backing up.

12.6                   Data integrity on computers

Where a laboratory handles data on computers, there are particular considerations, mainly of data security and control of data alterations. From time to time there will be a need, for perfectly legitimate reasons, for a laboratory to alter data which has been recorded. Entries may, for example, have been made erroneously, or quality control results may have indicated a need for a repeat measurement which then produces a different result.

The first thing that the laboratory should decide is whether the computer record constitutes the raw data, i.e., the data recorded at the time of the observation. This will only be the case where the data is logged into the computer directly from instruments or is entered at the bench. If data is recorded in notebooks or on worksheets prior to transfer to the computer, then these paper records are the raw data. Raw data must be preserved as part of the laboratory record. If there is manual transfer to the computer, then the laboratory will have to convince assessors that there are adequate safeguards to check that this is done correctly and that, once the transfer is completed and checked, any alterations to data are under control and in compliance with the requirements of the standard.

Manual entry of data into computers is clearly a potential source of error. Ideally, all such transfers should be checked by a second person. In critical cases, double entry of data can be practiced, where the data is entered twice to create two files which can then be compared by appropriate software. The onus will be on the laboratory to demonstrate to assessors that it takes reasonable steps to check that data is entered correctly. At the very least, the laboratory should check a proportion of the data entered and, hopefully, build up a body of information that demonstrates that significant errors are not being made.
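
Where double entry is used, the comparison of the two files is easy to automate. The sketch below (Python; the two in-line "files" and their column layout are invented) simply reports every sample for which the two entries disagree so that the discrepancy can be resolved against the raw data.

    # Sketch of a double-entry comparison: the same worksheet is keyed in
    # twice by different operators and the two files are compared field
    # by field. The layout (sample id plus one result column) is invented.
    import csv, io

    first_entry = "sample,result\nS-101,12.4\nS-102,8.7\nS-103,15.1\n"
    second_entry = "sample,result\nS-101,12.4\nS-102,8.9\nS-103,15.1\n"

    def load(text):
        return {row["sample"]: row["result"]
                for row in csv.DictReader(io.StringIO(text))}

    a, b = load(first_entry), load(second_entry)
    for sample in sorted(set(a) | set(b)):
        if a.get(sample) != b.get(sample):
            print(f"Mismatch for {sample}: {a.get(sample)!r} vs {b.get(sample)!r}")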

A more fundamental problem arises with computers when it is possible to alter data without leaving a record of the alteration or of the original entry. This contravenes basic requirements that all alterations must be traceable to the person carrying them out and must be made in such a way that the original value is retrievable. Software specifically written for laboratory purposes usually incorporates an audit trail which logs alterations and the identity of the person making them through their username. Such software will normally also retain a record of the original entry and may even require entry of the reason for making the change. Where there is an audit trail of this type, the laboratory will meet the requirements of the standard provided there is a mechanism for monitoring the audit trail as part of the quality checking procedure.
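
The essential content of such an audit trail is modest. The sketch below (Python; the record and field names are invented, and software written for laboratories normally provides this automatically) shows the minimum information each entry needs to capture: who made the change, when, the original and new values and, ideally, the reason.

    # A minimal sketch of an audit trail for data alterations.
    from datetime import datetime, timezone

    audit_trail = []   # append-only; in practice a protected file or database table

    def amend(record: dict, field: str, new_value, user: str, reason: str):
        """Change a value while preserving the original and logging who did it."""
        audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "record_id": record["id"],
            "field": field,
            "old_value": record[field],     # original value remains retrievable
            "new_value": new_value,
            "reason": reason,
        })
        record[field] = new_value

    result = {"id": "S-101", "lead_mg_kg": 0.42}
    amend(result, "lead_mg_kg", 0.24, user="jsmith",
          reason="Transcription error against worksheet WS-1234")
    print(result)
    print(audit_trail)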

Software not designed specifically for laboratories is unlikely to contain this audit trail facility and so other steps may need to be taken. The simplest option is to make a rule that all alterations must be recorded in a paper log. This should show the identity of the person making the alteration, the date of alteration and the old and new values. Ideally, there should also be facilities to record the reason for the change. Such a system, although simple, is not likely to be easy to present as compliant with the standard since it is, in most situations, not auditable. If someone makes an alteration to raw data in a computer and fails to fill in the log, then the failure cannot be detected. On the other hand, if the raw data exists on a work sheet, there will either be a discrepancy between the work sheet and the computer record, or the work sheet will show an alteration to the new value. In this case, the system is auditable and acceptable. Note here the critical need to be clear about what constitutes the raw data.

If data in the computer can be protected by the software from alteration, for example by being given read–only status when entered, then the use of a paper log of amendments is much more secure. The laboratory should establish clear rules on who can authorize amendments to the data, and they alone should be able to lift the read–only status of the data and make the amendment. The authorized person is responsible for filling in the log. Provided the authorization for alteration of data is at a suitably senior level, the assessors are likely to be satisfied. An appropriate level would be at least senior technical staff.

The discussion so far has been confined to raw data in computers, but the status of data changes as it moves through the laboratory and the degree of protection must be greater at each step. Raw data is first submitted to quality control scrutiny and, provided it passes, is then available for incorporation into reports. Up to this point, alterations may be carried out relatively freely, subject to the requirement for an alteration record to be generated in some form. However, once data has passed the quality control fence, only the most senior staff, normally the Laboratory Manager and designated senior professionals, should be able to authorize changes. This implies that data in computers must be protected from unauthorized alteration, either by being made read–only or by transfer to computers physically accessible only to authorized persons. A laboratory cannot comply with the requirements of ISO 17025 if quality-controlled data can be altered freely by any member of staff, especially if no automatic audit trail is generated. It is also important to remember that the need to alter data after it has passed quality control automatically creates a situation where a quality incident, non–conforming work, has occurred and corrective action will be required.

The situation escalates further once data has been released as a report. There would be a serious non–conformance if the laboratory’s record of data failed to reflect the report content. Specific procedures must be followed when reports must be amended, and the original and amended data must both be available – see Section 13.6. Once data has been reported, amendments to the laboratory’s archive must only be allowed at the highest level, by the Quality Manager and senior technical management, and so data on a computer must be totally protected from change by any other staff. It must also be impossible to generate a new version of the original report with altered data without meeting the requirements of the standard for report amendment – see Section 13.6. The need for report amendments automatically requires a quality incident record and corrective actions.

Beware of falling into the following common trap: data has been altered on a computer but not on the corresponding work sheet. If there is no record or audit trail for the alteration on the computer, a serious non–compliance arises since it is impossible to tell which data is valid. An initialled alteration on the work sheet to bring it into line with the computer value would have solved the problem.

12.7                   Computers which are a part of instrumentation

Computers which control or collect data from instrumentation should be treated as a part of the instrument and be checked as part of the operation of the overall system. Software changes and upgrades should be treated in the same manner as any other modification to the instrument and must be checked for satisfactory functioning before their release for routine use.

Care should be taken when instruments are serviced by suppliers since service engineers not infrequently load software patches and modifications. The laboratory management must make it clear to the service agency that it needs to be informed of any software changes so that they can be checked and recorded in the instrument log.

Computers attached to instruments can provide an excellent means of retaining raw data in a compact form. For example, chromatography data can be a problem to store as hard copy but archives on tape or compact disc are highly compact. However, secure backup regimes are essential to protect the data – see Section 12.9.

12.8                   Some considerations on Laboratory Information Management Systems  (LIMS)

LIM systems are becoming increasingly common and can assist greatly in the operation of a quality management system. It is even possible to obtain systems which incorporate records of calibration intervals and training review in a manner that will prevent the entry of data generated on instruments past their calibration date or by staff whose training review is overdue.

Some points about LIM systems are worth noting, however, since they sometimes introduce the possibility of quality anomalies and may not be providing exactly what the laboratory management supposes they are.

a.     Most LIM systems automatically stamp data entries with the identity of the person entering the data. The information is derived from the person’s computer username when they log on to the system. In this case, it is essential that the laboratory enforces a rule that staff must only use terminals which they have logged into under their own name. They must not just use any terminal which happens to be logged on and available or the audit trail is destroyed. There should be an absolute rule that no terminals are left logged on and unattended; in any case, this has security implications. In most systems it is possible to set terminals to log off automatically after a period of inactivity. This should be exploited.

b.     It is not unusual for LIMS to identify only the person entering data and to designate them as the analyst. In many practical laboratory situations, data may be entered by someone other than the actual analyst. The laboratory needs to be clear about how to achieve an audit trail which identifies the person carrying out the analysis. If analysts always enter their own data, then the LIMS can provide the trail very simply. If this is not the case, the laboratory must recognize that it is the person entering the data that the LIMS actually identifies and must have some other means of providing the audit trail to the analyst, for example, by initialling work sheets. In many instances, it is simply a matter of making the position clear and communicating it properly, especially to assessors. Increasingly, LIMS permit separate identification of the analyst and the person entering the data. This provides maximum flexibility and should be used wherever it is available.

12.9                   Backup of data on computers

This is obviously a critical area. It is also subject to the strange human perception of computers which leads to mildly illogical stances where one copy of data on paper is generally regarded as perfectly acceptable but once data is on machine readable media backup copies are required. The lack of logic is not, I hasten to add, in requiring backup of computers but in being happy with single paper copies!

A laboratory is under an obligation to ensure that it protects any data which it holds, especially if this is raw data or an essential part of the audit trail. Where a laboratory has raw data on computer, the backup regime must be extremely rigorous.

The policy for backup should be such that, should there be a system failure in which data not yet backed up is irrevocably lost, the laboratory would be able to recover it, either from worksheets or by repeating the tests. In the extreme situation where a laboratory is carrying out work which, by its nature, cannot be repeated and where the only copy of the data is on computer, backup will need to be very frequent. A common strategy is for laboratories with LIMS to have dual disk drives on the main computer or network server, or even dual servers, and for all data to be recorded directly on both. This is further supplemented by regular tape or optical disc backups, sometimes twice a day.

Most laboratories are not so exposed, and a daily backup will be sufficient; a good rule of thumb is to ensure that all data is backed up at the end of each working day and transferred to secure storage. This storage should be separated from the laboratory area by a fire break and, ideally, be provided with a fireproof safe designed for data storage. Fireproof safes designed for documents are not suitable since they reach too high a temperature to protect magnetic media.

If data must be kept in the laboratory, then a data safe is essential, and its use must be rigorously enforced.

When data is removed from online access, for example to regenerate capacity on the working computers, then two copies should be made, and one stored off–site or as far from the laboratory as possible. The laboratory will need to be in a position to retrieve the archived data with reasonable ease, so it is advisable to ensure that some index of the archive is generated. Good LIMS systems will provide this automatically and will retain the index as part of the online system, enabling rapid identification of the location of the archived data for a particular sample.
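
The sketch below (Python; all paths are invented, and many laboratories will rely instead on the facilities of their LIMS or network management) illustrates the elements discussed above: an end–of–day archive of the data area, a second copy destined for off–site storage, and a simple index entry so that archived data can be located later.

    # Sketch of an end-of-day backup: archive the data area, make a second
    # copy for off-site storage, and append an entry to a simple index.
    import shutil
    from datetime import date
    from pathlib import Path

    DATA_DIR = Path("/lims/data")               # working data to protect
    SAFE_DIR = Path("/backup/data_safe")        # fire-protected store on site
    OFFSITE_DIR = Path("/backup/offsite_stage") # staging area for off-site copy
    INDEX = SAFE_DIR / "archive_index.csv"

    def nightly_backup():
        stamp = date.today().isoformat()
        SAFE_DIR.mkdir(parents=True, exist_ok=True)
        OFFSITE_DIR.mkdir(parents=True, exist_ok=True)

        # One compressed archive of the whole data area, named by date.
        archive = shutil.make_archive(str(SAFE_DIR / f"data_{stamp}"),
                                      "zip", str(DATA_DIR))

        # Second copy destined for off-site (or at least remote) storage.
        shutil.copy2(archive, OFFSITE_DIR)

        # Append an index line: date, archive name, number of files covered.
        n_files = sum(1 for p in DATA_DIR.rglob("*") if p.is_file())
        with INDEX.open("a", encoding="utf-8") as f:
            f.write(f"{stamp},{Path(archive).name},{n_files}\n")

    if __name__ == "__main__":
        nightly_backup()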

The archive or backup medium can be any medium which is convenient. Tape or floppy discs are perfectly acceptable, but the increasing convenience of optical storage, for example writable CD and DVD drives, is proving popular and such facilities should certainly be seriously considered as part of an LIMS.


13.  Reporting Requirements

13.1                   Key Questions

o   Do you have a defined report format? 

o   Does this comply with the detailed requirements of ISO 17025?

o   Does your system ensure reports always reach only those entitled to receive them?

o   Does the report acknowledge subcontracted work?

o   Do you have a policy on how to deal with situations where it becomes apparent that suspect data has been reported?

o   Do you have a clear identification of staff authorized to approve reports? 

13.2                   Report Format

ISO 17025 is quite specific about what must appear on reports from the laboratory but also has a general requirement that the information must be reported accurately, clearly, unambiguously, and objectively. It must also be in accordance with any specific instructions in standard test or calibration methods specifications.

There is a relaxation of this if a laboratory is reporting within its own organization or in the case of a specific written agreement with a client. In these circumstances, the report may be abbreviated but, nonetheless, all the information required by the normal report format must still be available within the laboratory. Normally, the written agreement with clients to report in an abbreviated format should be concluded as part of contract review – see Section 7.3.

Requirements for test and calibration reports differ slightly but the following are common factors:

1.     A Title: Test Report, Calibration Certificate, Test Certificate and Calibration Report are all mentioned as appropriate terms in the standard but there is no compulsion to use these specific terms.

2.     The name and address of the laboratory and the name and address of the client. If the test or calibration was carried out on site, the location where the work was actually performed must also appear.

3.     A serial number or similar unique identification for the report, which should appear on each page together with pagination in the form (page x of y); an alternative way of indicating the total number of pages is also permitted provided it allows the user to be certain that they have a complete document.

4.     The client’s identification details for the samples or calibration items and the identifiers used by the laboratory, for example sample numbers. There should also be a brief description of the items and a note of their condition.

5.     The date of receipt of the items and the date(s) of testing or calibration. Strictly speaking, the date of receipt is only required when it is critical to the validity of the results, for example if testing needs to be done within a certain time of sampling or where samples or calibration items require conditioning before being worked on. In practice, there are few occasions where the date of receipt has no relevance, so it is recommended that it be incorporated routinely.

6.     Identification of the method used and any sampling plan or method which is relevant to the data. These may be a direct reference to a standard specification or may refer to a documented in–house method. In the latter case, a brief outline of the procedure, including any sampling methods, should be included. In practice, many accreditation bodies permit a generic statement indicating that the laboratory’s standard procedures were used and offering the client a reference list on request.

7.     A note of any deviations from a standard method and any environmental conditions which may bear upon the results.

8.     The test or calibration results themselves with units.

9.     The name, position and signature or other identification of the person accepting responsibility for the report and the report’s date of issue; it is recommended that the person accepting responsibility initials each page of the report where this is practicable. It is up to the laboratory to decide who is identified as having authority to release data, i.e., sign reports. The onus will be to satisfy assessors that the person(s) so authorized have the appropriate skills to evaluate the data.

10.  Where relevant, a statement that the results only apply to the items tested or calibrated. This is generally required in the case of product testing to prevent the results being applied inappropriately in support of general product or batch certification.

11.  Preferably, a statement that the report shall not be reproduced, except in full, without the written permission of the laboratory.

In addition to these general requirements, there are the following specific requirements for test reports:

1.     Details of any deviations from the standard test method and information on any relevant factors such as environmental conditions.

2.     Where relevant, a statement on compliance/non–compliance with requirements or specifications. The key test of relevance here is whether the client requires the information. This will depend on the question asked by the client. If the request is to test for compliance with a specific requirement, then there is clear relevance. If the request is simply for data, then the statement does not have to be volunteered.

3.     Where applicable, a statement of the uncertainty of the result; this is generally regarded as applicable when requested by the client, when it is relevant to the application of the test results and when uncertainty affects compliance with a specification. It is hard to see a situation where uncertainty does not affect compliance with a specification, but most accreditation bodies are currently not taking such a rigorous view. Note that even if uncertainty is not required in the test report, the laboratory is still required to have estimated it. This is a completely independent requirement.

4.     Where appropriate and needed, opinions and interpretations. This is a developing area as ISO 17025 is the first standard for laboratory systems which covers requirements for opinions and interpretations. Previous practice was to exclude them from the standard and hence from the scope of accreditation. A disclaimer was then required when reports contained opinions and interpretations. Many accreditation bodies are still operating in this way as an interim measure while they develop procedures for assessing laboratory systems for managing the quality of opinions and interpretations. The “when needed” requirement is, however, regarded as being relevant to situations where the opinion or interpretation effectively constitutes the result. Examples include forensic investigations, where the question posed may be one of goodness of fit to be answered on the basis of various test results from which a conclusion needs to be drawn. Whether an opinion or interpretation is required is an ideal issue to be clarified at contract review.

5.     Any other specific information which may be specified by the method or requested by clients.

6.     Reference to the method of sampling, the items sampled, the date of sampling, any relevant environmental conditions, and other relevant factors, when the laboratory undertakes sampling and this information is relevant to the interpretation of the results.

The additional requirements for calibration reports are as follows:

1.     A record of any environmental conditions which have an influence on the measurement results. This normally means temperature, possibly pressure and sometimes humidity.

2.     The uncertainty of measurement. Note that this is compulsory in calibration reports. The only exception is when a report is made about compliance or non–compliance with an identified metrological specification. In this case, the uncertainty must have been considered when determining compliance, so it does not explicitly need to appear in the report. However, the laboratory must still have a record of the uncertainty and the data used to determine compliance.

3.     Evidence of traceability. This normally requires details of the references used and cross reference to their calibrations.

4.     Where an instrument has been adjusted, the results before and after adjustment. This allows the client to estimate drift and perhaps to review calibration intervals.

5.     Unless there is a legal requirement, calibration reports should not carry any recommendations on a re–calibration interval or date since this might be taken to imply that the calibration has a specific time validity. Clearly the laboratory cannot offer any such guarantee once the item calibrated leaves its custody. This prohibition is relaxed if the client asks for a recommendation, but the laboratory should make it clear at contract review and in the wording of the report that this is only a recommendation and that no guarantees are being offered that the calibration will remain valid for the stated period.

Other factors to be borne in mind when reporting are the inclusion of sub–contracted data in reports and the question of reports which contain results for work not included in the laboratory’s scope of accreditation. The general principle enforced by accreditation bodies is that these results must be identified clearly and that the laboratory must not seek to misrepresent its scope of accreditation or to represent sub–contracted data as having been generated in–house. In the case of calibration laboratories, there is an absolute requirement to supply the client with the report issued by a subcontractor, but this is not required of testing laboratories. See Section 15 for further discussion of sub–contracting.

ISO 17025 has nothing to say about reporting tests and calibrations not covered by the scope of accreditation since it does not address the issue of limited scope of accreditation at all. Perversely, the standard is written on the assumption that everything which the laboratory does is within the scope. This is rarely the case. The consequences of this deficiency in the standard are generally dealt with by regulations formulated by the accreditation body to prevent misrepresentation of the scope and hence the areas of activity which they underwrite.

Some accreditation bodies require that a minimum proportion of the results, for example 50%, are within the scope of accreditation before their logo or reference to accreditation can be used. More commonly, a disclaimer, with clear marking of non–scope data is permitted. However, if none of the reported data is within the scope of accreditation, the logo of the accreditation body and/or reference to accreditation may never be used on the report.

13.3                   Some further comments on opinions and interpretations

Laboratories are required to be able to show that they have a documented basis on which professional judgements are made and that the qualifications and experience of those making them are appropriate.

Appropriate information would be references to any general requirements, standards, technical requirements, or contractual specifications which are being used as a basis for the judgement. In the case of judgements based on an individual’s professional experience, assessors will need to be satisfied, based on the staff records, that the person making the judgement is appropriately qualified. Wherever possible, laboratories should have guidelines for any routine interpretations and judgements which must be made to ensure that they are made consistently over time and by different individuals.

13.4                   Authorization of Reports

Authorization of reports can be a problem area. The ideal situation is that all reports are signed documents. In practice, commercial laboratories often must give data over the telephone or by fax. The status of faxed documents, even if signed, varies in different legal systems as does that of emails. The wisest strategy is to make it clear to clients, and in the laboratory’s quality documentation, that the only definitive report is the signed original transmitted in hard copy. All other transmissions, fax, electronic or verbal, are subject to confirmation. This should not, however, be a substitute for ensuring that data is only communicated after proper release. The laboratory should have a clear policy, for instance, on which staff are authorized to give results over the telephone, and this should only be permitted once the data has undergone all quality checks and is ready for inclusion in a formal report.

Reports generated by computer are another area where careful control is required. Increasingly, laboratories produce large numbers of reports directly from laboratory computer systems, often in quantities where signing each report is totally impractical. ISO 17025 permits authorization by means other than a signature, but the report must still identify an individual who takes responsibility for the data and his or her position.

In the case of reports generated from a computer, it is essential to have security such that only authorized persons can generate reports. It is also essential to ensure that, once a report has been generated, it is not possible for an unauthorized person to alter the data in the computer and then to generate a changed version of the report. The security must be such that the rules for report amendments discussed in section 13.6 are observed, which effectively means that once a report is issued to a client the laboratory’s copy, even if a computer file, must never be altered. Amendment requires that a completely new report is issued. The security can be achieved physically but assessors will normally expect to see appropriate software protection. Section 12 deals specifically with the use of computers and ways of providing appropriate safeguards.
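
The sketch below (Python; the report identifiers and fields are invented) illustrates the principle: once a report is issued its content is fingerprinted and never overwritten, and an amendment is itself a new report which refers back to the original, as required by Section 13.6. A real system would add access control so that only authorized signatories could issue or amend reports.

    # Sketch of computer-held reports that cannot be silently changed once issued.
    import hashlib, json

    class ReportStore:
        def __init__(self):
            self._reports = {}   # report_id -> record (never overwritten once issued)

        def issue(self, report_id: str, data: dict, authorized_by: str):
            record = {"data": data, "authorized_by": authorized_by,
                      "status": "issued", "amends": None}
            # Fingerprint the issued content so later tampering is detectable.
            record["sha256"] = hashlib.sha256(
                json.dumps(data, sort_keys=True).encode()).hexdigest()
            self._reports[report_id] = record
            return record

        def amend(self, original_id: str, new_id: str, data: dict, authorized_by: str):
            if original_id not in self._reports:
                raise KeyError("cannot amend a report that was never issued")
            # The original stays in the archive untouched; a new, endorsed
            # report is issued which refers back to it (see Section 13.6).
            record = self.issue(new_id, data, authorized_by)
            record["amends"] = original_id
            return record

    store = ReportStore()
    store.issue("R-2023-0145", {"S-101": 12.4}, authorized_by="Laboratory Manager")
    store.amend("R-2023-0145", "R-2023-0145A", {"S-101": 12.9},
                authorized_by="Laboratory Manager")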

The format and content of reports will come under very close scrutiny by accreditation bodies. It is the report, after all, which carries the accreditation body’s logo and, hence, its stamp of approval. It is in the interests of neither the accreditation body nor the accredited laboratories that this approval appears on an inadequate document.

13.5                   Transmission of Reports

Normally, reports will be sent by post to the client’s address. Where other means, for example, fax, telephone, or electronic transfers are used, there must be a documented policy to preserve confidentiality.

This should normally specify that the client must agree, in writing, to the transmission and, where the telephone is used, that results should only be given to an individual who can be recognized by the laboratory as entitled to the results. It is good practice for the laboratory to require the client to confirm their own sample identifiers or some other information related to the samples before being given the results over the telephone to establish bona fides. Telephoned results should always be confirmed in writing.

If the client requires the results to go to an address other than their normal address, or data is given to an alternative telephone, fax or email location, the laboratory should ask for confirmation of this requirement in writing.

13.6                   Amendment of reports

The rules for amending reports once they have been issued to the client are quite specific. The original report cannot be destroyed and expunged from the system to be replaced by an extended or corrected version. A completely new report must be issued which complies with all the normal reporting requirements, and it must be endorsed to show that it is an amendment, supplement, or complete replacement of the previous version. The laboratory must retain copies of the original and the amended versions as part of its archive. Care is needed when report archives are computerized to ensure that the new version does not overwrite and obliterate the original.

A need to issue an amended report will imply a prima facie quality failure – non–conforming work – and will need investigation and corrective action.

13.7                   Response to reporting of suspect data

When the laboratory discovers a case where suspect data has been released, for example if an instrument is found to be out of calibration at a regular check and it is uncertain when it went out of calibration, the general requirement is that clients should be notified of any such questions about the data reported to them.

There are obviously substantial commercial sensitivities here, and it is generally accepted that laboratories are only required to notify clients after they have carried out a full investigation and determined that there really is a problem.

Accreditation bodies do, however, enforce this requirement rigorously and will not be swayed by arguments about the commercial damage which could result from writing to clients to tell them that their data may be wrong. Indeed, if the assessors find a problem, for example, an un–calibrated instrument, they may well insist on notification of all clients whose data may have been affected as part of the action to clear the non–conformance. Note that this refers only to surveillance visits. Assessors cannot insist on such action at initial assessment since data released prior to accreditation is not covered by the accreditation requirements.


14. Purchasing services and supplies

14.1                   Key Questions

o   Do you take steps to control the quality of any purchased goods or services which might affect the validity of your data?

o   Do you have a list of approved suppliers and a policy on their selection?

o   Do you keep this list under review and remove any suppliers found to be inadequate from a quality perspective?

o   Do you have a system for checking all received materials against the order specifications?

14.2                   Requirements

The general requirement is that, where the quality of any outside services or supplies may have an impact on the quality of the data or calibrations emanating from the laboratory, there must be procedures to ensure that the quality of services or supplies is adequate and consistently so.

The two extremes of an approach to the quality of supplies and services are for the laboratory to check everything on receipt, or for only approved, preferably certified or accredited, suppliers to be used and checks to be dispensed with. In practice, elements of both extremes will be used. Laboratories have approved and trusted suppliers but also carry out checks, often as an integral part of methods, for example reagent blanks or calibration checks.

The onus will be on the laboratory to demonstrate to assessors that it controls the quality of any input, service or supply which could impact data quality.

In practice, the approved supplier position under ISO 17025 is not nearly so big an issue as it is under ISO 9001 since test and calibration methods generally contain quality control checks which, indirectly at least, monitor materials and services obtained outside. No laboratory is, for example, likely to want to argue that running checks, such as reagent blanks, can be dispensed with based on the supplier’s specifications for a reagent, however impeccable the supplier’s quality management certification.

It is when such checks are statistical that the laboratory has to be careful about assessment of suppliers, for example where, rather than running a blank each time a method is run, a batch of reagent is sampled and checked for suitability on receipt. The level of checks applied will then need to be defended, and the quality assurance capabilities of the supplier will then be an issue. This may even mean different levels of checking being applied to the same materials obtained from different sources.

14.3                   Approved Suppliers

ISO 17025 requires policies for the selection and use of external services and suppliers, and records of both the suppliers used and their quality assurance approval. In practice, this means that the laboratory must have a list of approved suppliers with, where appropriate, a list of the goods or services which each is approved to supply.

There should be a policy for approving suppliers which is normally administered by the Quality Manager. The policy should state that, wherever possible, the laboratory will use suppliers who hold ISO 9001 certification for their quality system, or relevant product certification. Those with such certification would normally gain automatic accession to the approved list. In the case of non–certified suppliers, the laboratory will have to make its own assessment of their quality assurance capabilities and/or arrange appropriate checks on supplies on receipt.

Indeed, there is no reason at all why the Quality Manager cannot take the view that the checks carried out by the laboratory on receipt of materials or as a part of methods are such that the supplier’s quality assurance capability is irrelevant. However, if this is the position, it is still necessary to list the suppliers used and to note that their use is conditional on the continuation of relevant in–house quality checks on materials.

Checks on the quality assurance status of suppliers who provide services such as instrument repairs must also be considered. Many service agencies and service departments now hold ISO 9001 certification, which simplifies this area considerably. Where there is no certification, the laboratory will need to carry out its own assessment. As with the supply of materials, there is rarely a major issue since a laboratory will generally be able to check an instrument before it is returned to service after a repair, and normal quality control within methods will provide additional security. There may, however, be quality issues surrounding response times to service requests, which will need evaluation.

14.4                   Inspection

There should be a defined mechanism to ensure that any goods received are inspected before being released for use. This should involve a check against the order specification and should pay particular attention to factors such as specified purity of reagents or accuracy of calibration of equipment.

An individual should be made responsible for carrying out the checks, and there should be clear segregation or labelling of goods to distinguish between those which have been checked and those which have not yet been released for use.

In the case of services to equipment, an individual must be given responsibility for deciding whether the work has been satisfactorily carried out before the serviced equipment is put back in use. In the case of laboratory instrumentation, this will typically be the duty of the responsible person.

14.5                   Administration

The Quality Manager should review the approved suppliers list at least annually and should review any supplier immediately if problems become apparent. There must be a general instruction that any staff member who has problems with the quality of supplies, goods, or services must report them to the Quality Manager. This ensures that all the information comes together at one point. It is not unusual in large organizations for a supplier to be causing small problems in different departments which, when brought together, add up to considerable concern about the supplier’s overall suitability.

The simplest way of ensuring that these reports actually reach the Quality Manager is to make this reporting route a standing instruction in the quality documentation.

A mechanism should be established to prevent orders being placed with non–approved suppliers. The usual practice is to require some person in the order processing chain to verify the order against the approved supplier list and to endorse it as satisfactory in this respect. The person charged with this responsibility can be either the person authorizing the order or an administrator who processes the order. In either instance, the quality documentation should confirm the person’s authority to intercept the order.

14.6                   Starting up the Approved Supplier List

A laboratory which is installing a quality system for the first time will already have several suppliers which are routinely used. These should all be contacted with an explanation that the laboratory is seeking accreditation and asked to supply details of their quality management procedures or certifications.

If they are unable to do this, the laboratory need not stop using them, provided that the Quality Manager is satisfied that the goods or services supplied will be of acceptable quality and/or adequately checked before use in any situation where the quality of data could be put at risk. A history of use of the supplier with no problems can provide suitable evidence. In such cases, the Quality Manager should prepare a brief report on the supplier which details the history of use and shows any supporting evidence for their acceptance. The key point from an assessment point of view is that the laboratory will have to convince the assessors that it has information about the supplier’s ability to maintain quality and so can put in place any local checks needed before goods or services are released for use.

14.7                   Operating within a tendering system

One difficulty which sometimes arises is when laboratories operate in countries where purchases must be via a transparent tendering system. This often means that the laboratory is not able to select suppliers nor, in some instances, influence the selection.

The best way to deal with this problem is to defer evaluation of the supplier until the tendering process is complete. If the supplier is already on the approved list, then there is no problem. If the supplier is not approved, the Quality Manager should evaluate the supplier against the criteria for approved supplier status. This means:

a.     Determine whether the supplier has certification.

b.     If the supplier has no certifications, then the Quality Manager should determine what other assurances on quality are available.

c.      In the light of the information obtained, the Quality Manager should then decide on any checks on supplies that need to be made on receipt and before the supplies are released for use.

The strategy to be adopted should then be recorded and communicated to relevant laboratory staff.


15.  Sub–contracting

15.1                   Key Questions

o   Do you evaluate the suitability of any subcontractors used against a defined set of criteria?

o   Do you have a list of these approved subcontractors?

o   Do you have a mechanism which ensures that the use of any subcontractors found unsatisfactory is discontinued?

o   Do you obtain clearance from clients before sub–contracting work?

15.2                   Requirements

ISO 17025 includes the specific statement that a laboratory is always responsible to the client for the work the subcontractor performs, although a dispensation is allowed where the client insists on a particular subcontractor. This is the key requirement, and it implies that the laboratory investigates and approves its subcontractors, as described in Section 15.2.1.

The overall need is to satisfy assessors that subcontractors are selected after careful investigations and that the laboratory takes a responsible attitude to their selection and continued use. In practice, accreditation bodies do not apply this requirement with great rigour and they are reasonably understanding of commercial sensitivities.

Sub–contracting should be covered by contract review as discussed in Section 7.3. This also takes care of the requirement in the standard to ensure that the client is aware of the arrangement, although it should be noted that ISO 17025 specifically requires that the client be informed in writing of the arrangements for sub–contracting.

In the case of reporting back from subcontractors, a testing laboratory must simply acknowledge that work has been sub–contracted, but calibration laboratories are required to supply the client with the sub–contractor’s report or certificate. There is no absolute requirement to notify the client of the identity of the subcontractor, although the requirement for calibration laboratories to pass on the subcontractor’s certificate rather negates this. Testing laboratories can, however, withhold this information.

15.2.1    Approved subcontractors

The laboratory must keep a list of approved subcontractors, normally compiled by the Quality Manager. Laboratories which operate a quality system in compliance with ISO 17025 are automatically eligible to appear on this list. They may demonstrate their compliance by providing evidence of accreditation by an accreditation body constituted in accordance with ISO 17011.

In the case of non–accredited laboratories, the Quality Manager will be required to carry out an appraisal, which might include a visit to the subcontractor to carry out an audit, to establish to his or her satisfaction that the laboratory is ISO 17025 compliant in the relevant areas. The key issues would be to ensure that the laboratory is suitably equipped, has training procedures and records, calibrates its instruments traceably and has appropriate internal and external quality control procedures, including participation in relevant interlaboratory proficiency exercises. A report on the appraisal must be kept on file, and the authorization should be reviewed at least annually.

A decision should also be made on whether it is necessary to carry out quality checks on the performance of subcontractors by, for example, the blind submission of quality control samples. These checks may be appropriate when initially evaluating the subcontractor and may also be carried out on an on–going basis. At least occasional use of such checks is strongly recommended where possible, especially for non– accredited subcontractors, since it is likely that assessors will ask the laboratory whether they have any evidence of the quality of the subcontractor’s work.

Another approach sometimes used is to arrange for the subcontractor to forward the results of any relevant interlaboratory proficiency exercises in which they participate. Willingness to do so, in itself, provides confidence in the subcontractor.

15.3                   Administration

There should be a clearly stated policy on who may authorize sub–contracting. This is normally the Laboratory Manager. This person should have the responsibility of checking that the subcontractor is on the approved list and should have the authority to refuse to authorize the sub–contracting if a non–approved subcontractor is requested.

The Laboratory Manager should review the performance of subcontractors on a regular basis, and at least annually, and report any problems to the Quality Manager for investigation.

16. Guidance on writing a Quality Manual

16.1                   Some general considerations

There is no set format for a Quality Manual and it must, of necessity, be a document individual to the laboratory. It is the definitive document: it defines the quality management system and the procedures which implement it.

Accreditation bodies often provide a guidance document suggesting a format, and there is something to be said for following their recommendations. Unfortunately, however, this can lead to a manual which is less than ideal from the laboratory’s point of view and perhaps has the appearance of having been designed by a committee. On the other hand, it will be relatively easy for assessors to review: they can check it off against the guidance document and can hardly object to a manual which follows it closely.

The contents of the manual and the documents to which it refers will be the basis for all audits and assessments of the quality system. It is, therefore, important that the manual describes the system as it is operated. It must be a working document and not a description of an ideal world.

Be particularly careful that you cover all eventualities. If you assign a responsibility to a post holder, remember that they may not always be available. The manual will need to provide an alternative, such as a deputy or another point of reference, in these circumstances. There should be a statement that all responsibilities ultimately revert to the Laboratory Manager, who may delegate them again if necessary. The laboratory should always try to ensure that the Laboratory Manager and his or her deputy are never absent at the same time.

Another important point to remember is that responsibility and authority go together. Whenever you assign a responsibility, you must also assign the appropriate authority. In the context of quality management this may involve giving a Quality Manager authority on quality matters over a line management superior. The superior, in supporting the quality policy, should respect this authority.

You should be careful in writing the manual not to create procedures and policies which are bound to fail. Within the limits of the standard, give yourself as much flexibility as possible. You may describe preferred courses of action but allow alternatives under defined circumstances if it is clear who has the authority to permit the alternative to be taken.

For example, a supplier policy may state that the preferred suppliers are those holding ISO 9001 certification, but that alternatives may be used where the goods are not available from such a supplier. The policy might then go on to state that the Quality Manager may give approval for the use of an alternative provided that the goods are checked before being used.

The next section gives one suggested outline for a Quality Manual that describes an ISO 17025 compliant system, with some notes on the contents of each section. It is not essential that the information described appear explicitly in the manual. Subsidiary documentation can be used and referred to. Be careful, however, not to create an unnecessarily complex documentation structure as this will be difficult to maintain. Another trap to avoid is that of duplicating information across different documents, since it is difficult to ensure that the versions in the various documents are maintained together and remain consistent.

Finally, remember that an assessment will be against your quality documentation as well as against the standard. This means that if you make a commitment in your documentation which goes beyond the requirements of the standard and then fail to meet it, you will still have a non–conformance, even if what you are doing is within the standard.

For example, ISO 17025 requires an annual review of the quality system. If you were to enter a commitment to a six–monthly review in your quality manual but what you do is review annually, you would be meeting the standard but would still have a non–conformance against your documentation.

The rule is not to commit to anything beyond the standard, even if you intend to go beyond the standard. In the example given, the quality manual should say that you will review “at least annually.” This gives you flexibility and conformance with the standard.

16.2                   An outline for a Quality Manual

16.2.1    Quality policy statement and accreditation

This should be made on the authority of the most senior management body for the laboratory. This must be at the level where decisions on resource allocation are made. It should contain a commitment to quality, to good professional practice and to a quality management system based on ISO 17025. It should also contain a commitment to provide resources to support this level of quality.

It is also conventional to include a commitment to provide only one level of service for tests and calibrations within the accreditation scope. This requirement arises from a situation where some laboratories were offering to carry out work either to an accreditation standard or to a lower standard with a price differential. Accreditation bodies insist on a single level of service since, otherwise, a laboratory might use its accreditation to attract the work and then offer an inferior and cheaper service. It is, however, possible to be accredited for the same test or calibration to different levels of accuracy, for example, but it is hard to see any advantage in this in most cases.

The policy statement should carry the name, position, and signature of an appropriate representative of the senior management body, ideally the chief executive, and should explicitly give authority to the Laboratory Manager and the Quality Manager to implement and operate the quality system. It should also require all personnel to familiarize themselves with the quality documentation and to always follow its requirements.

The policy statement should be followed by a reference to any accreditation held by the laboratory and a reference to an appendix containing the scope of accreditation or the scope which is the basis of any pending application for accreditation.

16.2.2    Organization and Management

This section should show the internal organization of the laboratory and the relationship between the laboratory and any organization of which it is a part. It is a good idea to include organizational charts. These charts should show that the Quality Manager has access to the highest level of management and the Laboratory Manager.

Each level of staff should be described, with an outline of the level of experience and qualifications required to fill each grade. The object of this is to set a minimum acceptable level of expertise at each level which the laboratory undertakes to maintain, but the description should allow sufficient flexibility to admit staff with specialized but narrow capabilities, where required.

The supervisory requirements at each grade should be defined, for example an assistant chemist must always work under the direct supervision of a chemist or higher, and the limits of responsibility and authority of each grade should be clearly explained.

There should be a statement of the policy on the use of staff undergoing training and a requirement for their direct supervision.

Reference should be made to the staff records or equivalent source as containing a list of current post holders.

16.2.3    Job Descriptions

This section should contain full job descriptions of key staff. This must include the Laboratory Manager and the Quality Manager and their deputies. It should make clear what the responsibilities of each post are and what functions each performs.

Any other key posts should be included. Some laboratories, for example, assign duties to a calibration officer who is responsible for calibrating all instruments and maintaining reference standards.

16.2.4    Approved Signatories

This section must define precisely, either by name, seniority or post, the individuals who are authorized to take responsibility for the laboratory’s data. Only these individuals may authorize the release of work and sign test/calibration certificates.

16.2.5    Acceptance of work

This section should make clear exactly who may accept work and commit the laboratory to a delivery date. It should state that the person accepting the work is under an obligation to ensure that the laboratory has the equipment and expertise to do the work and that they must not enter a commitment unless they can be certain on this point. The formal contract review process can be described here.

16.2.6    Quality Documentation

The structure of the quality documentation should be defined. This will normally be a hierarchy, headed by the quality manual, which refers to the methods manual or equivalent technical and other procedural documentation.

Reference should be made to the subsidiary records and documentation such as the equipment logs and the staff records.

The purpose of each piece of documentation must be defined as well as the person responsible for maintaining it and authorizing it to be issued. The availability of each document should be stated, for example whether it is issued and to whom, in whose custody it is kept, where it is kept and who has right of access.

There should be an instruction to all staff to abide by the documented procedures. This should be qualified by allowing, for example, the Laboratory Manager to permit departures from documented procedures where technical considerations make this expedient, provided that he or she is confident that quality will not be undermined as a result. There must be an instruction that all such departures must be recorded and noted on reports where relevant.

Staff inadvertently deviating from documented procedures must be instructed to bring this to the attention of the Laboratory Manager, who must decide whether quality is compromised and what action is needed. All such departures must be recorded.

16.2.7    Document control

The controlled document system should be described and the responsibility and authority of the Quality Manager in this respect defined.

16.2.8    Scope of Tests/Calibrations

This should state the laboratory’s policy to use internationally recognized methods wherever possible, supplemented by fully validated and documented in–house methods. This section should also include or refer to a list of typical sources for methods appropriate to the laboratory’s scope of activities.

16.2.9    Test/Calibration Methods

There should be a description of the procedure for introducing a new method. This will generally involve the Laboratory Manager in arranging to validate and document the method. The Quality Manager should approve the validation and documentation before the Laboratory Manager releases the method. An outline of the format for in–house documented methods should be given, and a procedure for the withdrawal or amendment of a method described.

16.2.10 Equipment and Reference Standards

This section should list the major items of equipment which the laboratory operates and the reference standards held. This can be expressed in general terms, with reference made to the equipment logs as the full inventory. The format and operation of the equipment logs should be described, together with the procedure for checking and accepting a new piece of equipment into service and the procedure for the withdrawal of equipment.

16.2.11 Calibration Policy

There should be a statement of the policy of the laboratory to achieve traceability of all measurements using traceable (to SI units where relevant) standards of measurement and certified reference materials. Where this is not achievable there should be a commitment to interlaboratory calibration exercises and similar measurement audits.

The policy that reference standards are to be used for calibration only, and not for routine purposes, should be stated.

16.2.12 Calibration and Maintenance of Instruments

There should be a general statement of the policy to calibrate at intervals such that the integrity of measurements is not set at risk. The preferred procedures for determining calibration intervals should be stated. Reference should be made to the procedural documents which describe instrument calibrations.

The procedure for labelling equipment which requires calibration should be described, and instructions must be given that equipment with an expired calibration is not to be used.

A general responsibility should be placed on all staff to ensure that any instrument or piece of equipment which they suspect is out of calibration is not used until checked. The equipment must be clearly labelled as suspect and not to be used. The problem should be brought to the attention of the Laboratory Manager.

16.2.13 Methods and uncertainty of measurement

This section should describe the laboratory’s policy and procedures on method validation and the determination of method performance, and on assessing uncertainty of measurement.
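By way of illustration only (this reflects common metrological practice, such as the GUM approach, rather than wording taken from the standard itself), measurement uncertainty is often estimated by combining the standard uncertainties of the input quantities in quadrature and then applying a coverage factor:

\[
u_c(y) = \sqrt{\sum_i \bigl(c_i\, u(x_i)\bigr)^2}, \qquad U = k\, u_c(y)
\]

where the c_i are sensitivity coefficients, u(x_i) are the standard uncertainties of the input quantities, and a coverage factor of k = 2 gives approximately 95 % confidence. The quality manual itself would normally state only which approach the laboratory uses, leaving the detailed calculations to the methods documentation.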

There should be a description of the procedures to be used for the initial validation of methods, and of the responsibility of the Laboratory Manager for updating this information on the basis of QC data.

The section should also give guidance on the general policy of the laboratory on the frequency of running QC samples, spikes, and duplicates.

16.2.14 Quality Control

There should be a general statement on which levels of staff, or which individuals, are permitted to judge whether results meet quality control criteria. Reference should be made to the fact that the methods documentation includes details of the quality control data to be collected and the criteria to be applied.
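As an illustration of the kind of criteria that might be documented (a widely used convention, not a requirement of ISO 17025 itself), control charts for QC samples are often given warning and action limits derived from the mean and standard deviation of results obtained under stable conditions:

\[
\text{warning limits} = \bar{x} \pm 2s, \qquad \text{action limits} = \bar{x} \pm 3s
\]

A result outside the warning limits prompts closer scrutiny, while a result outside the action limits would normally require the batch to be repeated and the cause investigated.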

The general responsibility of the Laboratory Manager to monitor and act upon quality control data should be stated.

There should be a commitment to interlaboratory proficiency checking exercises and/or measurement audits, together with a list of such exercises in which the laboratory typically participates.

16.2.15 Procedure when data is suspect

The procedure to be followed when it is suspected that faulty data has been released should be described. This will normally require investigation by the Laboratory Manager and Quality Manager, and probably an audit. Corrective action would also normally be required.

The laboratory’s policy to inform clients as soon as possible of suspect data must be stated with a commitment to check the data and, if necessary, to issue an amended report.

16.2.16 Handling of samples and administration of work

This section should have a complete description of the laboratory’s procedure for receiving, storing, and recording samples, sample numbering and labelling, allocation of work, recording of results, quality checking of results, preparation of reports and issuing reports.

In writing this section, try to describe in a systematic way the manner in which samples and results are managed through your laboratory. Pay particular attention to how the clients’ requirements are communicated to the bench workers and how the bench workers pass the results back to the reporting process.

16.2.17 Recording of Results

This section should describe the use of worksheets and/or notebooks. Instructions on the use of ink and the way of making corrections should be given.

16.2.18 Disposal of samples and other waste

The laboratory’s policy on the length of time samples are kept should be stated, as should the policy on disposal, with a commitment to the responsible disposal of toxic materials.

16.2.19 Records

The laboratory’s policy on the retention of records should be stated, and the procedure to be followed in disposing of records must be given. This should define who may authorize disposal and require that an inventory be kept of the records disposed of.

The policy on security of records, including computer data, must be stated, and the person responsible for archiving and computer back-up identified.

16.2.20 Reporting of Results

The minimum requirement for the contents of a report should be given (see Section 13.2) and an example of the preferred layout included.

The requirement to identify sub–contracted results must be stated. Where the laboratory holds accreditation there must also be a stated procedure for identifying results of methods not included in the accreditation scope.

The procedure for retaining confidentiality when reporting results other than by post should be set out.

Where reports have to be amended there must be a statement that this can only be done by the issue of a complete new version with an endorsement such as “Amendment to Certificate No. …..”

16.2.21 Quality Incidents, Complaints and Control of Non–conforming Work

The laboratory’s policy to treat complaints positively and as a source of useful information should be stated. The persons authorized to deal with complaints should be identified and the procedure for recording complaints and following them up defined, including the requirements for corrective action.

The system for dealing with internally detected quality problems and instances of non–conforming work needs to be clearly described. This should include the assignment of responsibility for ensuring that work is suspended pending an investigation and that corrective action is carried out. The person responsible for allowing work to be resumed needs to be identified.

16.2.22 Confidentiality

The laboratory’s policy to maintain confidentiality should be stated. Instructions must be included that all staff must take all reasonable precautions to keep clients’ data and other information confidential. The requirement to ensure that no such information is left out in the laboratory overnight or in an unattended room should be stated.

16.2.23 Staff appointment, training, and review

The operation of the staff records must be described, including their use for recording new staff and changes in the training or status of existing staff.

The mechanism for selecting staff for training, carrying out the training and assessing competence and for issuing authorization to carry out tests, calibrations and other procedures should be described.

The mechanism for an annual review of staff capabilities must be laid out, as must the means of recording the results.

Staff should be given a general instruction that they are responsible for carrying out only those operations for which they are authorized. It should be made clear that staff are entitled to refuse to do work for which they are not authorized.

16.2.24 Procedures for audit and review of the Quality System

All the procedures for the audit and review of the quality system should be described, together with the records to be kept; the policy on the frequency of audits and reviews should also be included.

16.2.25 Corrective action

The procedure for agreeing and recording corrective action should be described, as should the procedure for follow-up to ensure that corrective action is complete and has been effective.

16.2.26 Preventive action and Improvement

The procedure for identifying preventive action and quality improvement opportunities should be described, and the responsibility for evaluating suggestions and carrying out the preventive action assigned.

16.2.27 Premises and Environment

The laboratory premises should be described and, ideally, a plan included. This section should also draw attention to any parts of the premises to which access is restricted, stating who is authorized to grant access, and should describe any areas subject to special environmental controls as well as the mechanism for monitoring, recording and maintaining such control.

Where laboratories conduct activities which are incompatible, for example trace and high-level analysis for metals, there should be a description of the facilities provided to ensure the necessary segregation.

16.2.28 Security of Premises

This section should describe the arrangements for the security of the premises during and outside working hours, identify the persons authorized to hold keys, describe the procedure for granting authorization, and identify the person with overall responsibility for security.

16.2.29 Appendices

There should be appendices covering, as a minimum, the following topics:

a.     A list of the scope of accreditation held or applied for.

b.     A list of holders of the quality manual.

c.      A list of all controlled documents and subsidiary documentation together with their scope of issue or storage locations.

d.     Examples of pro–formas for recording quality issues such as audits, corrective and preventive action and client complaints.

e.     An example of the laboratory’s proposed report format.
