Adaptable

Those aspects of a service that may be altered, refined or adapted in order to foster greater engagement, retention or satisfaction of those in receipt of a service (yet do not disrupt the underlying core mechanisms of the service or intervention).

Adaptable practice

Those aspects of a service that can be adapted without compromising its core components. Adaptable components can enable practitioners to tailor a service to fit the unique requirements of the local context and beneficiaries of replication.

Adherence

A dimension of fidelity. Refers to whether the core components of a programme are delivered as designed, to those who are eligible for the service, by appropriately trained staff, with the right protocols, techniques and materials and in the prescribed locations or contexts.

Affiliation

When an official ongoing relationship is formed with independent individuals or organisations to help them implement a replication. There is generally a legal framework that sets out the nature of the relationship. There is often a financial relationship between the two parties, normally with the originator charging a fee to implementers, though the finances can be structured in a number of other ways.

Attribution

In the context of evaluation, this refers to whether or not changes in beneficiary outcomes may be explained or accounted for by a service or activity. A lack of attribution means that it is not possible to know whether or not any changes in beneficiary outcomes were the direct result of the service or activity, or would have otherwise occurred.

Baseline

A measurement of participant characteristics and outcomes taken at the beginning of a study – in the case of an impact evaluation, before the intervention is implemented.

Break-even analysis

An analysis that calculates the break-even point at which a profit begins to be made per unit. In the context of cost-benefit analysis, this point is expressed as the size of the effect on outcomes that would yield sufficient monetary benefits to break even after accounting for unit costs.
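The arithmetic behind a break-even point can be sketched in a couple of lines. The figures and the function name below are illustrative assumptions, not part of the glossary: with a hypothetical unit cost of £1,200 and £400 of monetised benefit per unit of outcome improvement, the service must shift outcomes by three units per participant to break even.

```python
def break_even_effect(unit_cost, benefit_per_unit):
    """Effect size on outcomes at which monetised benefits cover the unit cost."""
    return unit_cost / benefit_per_unit

# Hypothetical figures: £1,200 per participant, £400 per unit of improvement.
print(break_even_effect(1200, 400))  # 3.0
```

In practice the monetised benefit per unit of outcome is itself an estimate, so break-even results are usually reported as a range rather than a single figure.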

Business case

A business case provides justification for a proposed project or programme. Ideally it includes an analysis of costs and likely benefits, a detailed budget, and evidence of the need and demand for the service.

Client management information system

A database that allows projects to view their real time data on outcomes, fidelity monitoring, quality assurance processes and other delivery data such as costs and staffing. High quality systems will typically allow users to view data in a visual format (graphs, charts etc) and enable data to be analysed and presented in a variety of ways (by delivery year, project type, outcome etc). These systems are useful for monitoring children’s outcomes as they progress through a programme, monitoring the quality of delivery across multiple sites, and testing the results of adaptations to programme components.

Commissioner

Commissioners are responsible for the strategic allocation of public funds to projects, programmes or services that best address the needs of children, young people and families in their geographical and service area (for example Children’s Services, Health, Education, Youth Justice). Their priorities are to commission services that represent good value for money, are delivered to a high quality, and maximise the likelihood of positive impact.

Control group / comparison group

A group of participants within an experimental evaluation who do not receive the programme or service under evaluation, used to measure the outcomes that would have occurred without the programme.

Core components

The key activities that make the service work. Put another way, the specific aspects or mechanisms of a service that lead to the desired change in outcomes. For a service to be replicated successfully, providers need to be clear about what can and cannot be changed.

Cost-avoidance

Refers to actions taken to reduce future costs. Cost-avoidance as a value is the difference between what is actually spent and what would have been spent had no avoidance measures been implemented.

Cost-benefit analysis

The estimation of financial returns on an investment or service. Returns are typically estimated for individual recipients of the service, agencies providing the service and the state. Cost-benefit analyses rely upon accurate cost information and robust evidence of impact (ideally from experimental evaluations). Cost-benefit analysis may produce a calculation of net benefit (benefits minus costs) or the ratio of benefits to costs.
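The two summary figures a cost-benefit analysis typically produces can be expressed directly. The function names and the figures below are illustrative assumptions rather than a standard formulation:

```python
def net_benefit(total_benefits, total_costs):
    """Benefits minus costs; a positive value indicates a net return."""
    return total_benefits - total_costs

def benefit_cost_ratio(total_benefits, total_costs):
    """Benefits per unit of cost; a ratio above 1 indicates value for money."""
    return total_benefits / total_costs

# Hypothetical figures: £5,000 of monetised benefits against £2,000 of costs.
print(net_benefit(5000, 2000))         # 3000
print(benefit_cost_ratio(5000, 2000))  # 2.5
```

Both figures depend entirely on how benefits are monetised, which is why robust evidence of impact is a precondition for a credible analysis.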

Data sharing

The lawful and responsible exchange of data and information between various organisations, people and technologies.

Delivery and impact reporting system / Client management information system

Typically a web-based system that allows projects to view their real time data on outcomes, fidelity monitoring, quality assurance processes and other delivery data such as costs and staffing. These systems are useful for monitoring children’s outcomes as they progress through a programme, monitoring the quality of delivery across multiple sites, and testing the results of adaptations to programme components.

Demand

In the context of social interventions, the number of individuals who (a) match the particular target group within a given population and (b) actually want to participate in the programme.

Dissemination

In this replication model the developer creates resources that enable an independent other to implement the venture in a new location. There is a loose relationship between the originator and the implementer. In some cases a fee may be charged for materials or advice, but there is generally no ongoing financial or legal relationship between the two parties.

Direct unit cost

Those financial costs directly related to a service, typically borne by the lead organisation when setting up and running the intervention (including, for example, staff time, training costs, materials and capital costs, overheads).

Early intervention

Intervening in the early stages in the development of difficulties (not necessarily at an early age). Early intervention activities or services seek to stop the escalation of difficulties with the aim of promoting subsequent health and development.

Eligible young people

Those young people who fit the target criteria for a specific service or programme. This could be based upon factors such as their age or gender, or relate to the difficulties they may be experiencing such as homelessness, conduct disorder, or educational problems. Those young people who are eligible for a service or programme should be the same young people who are likely to benefit most from receiving it.

Evaluation

Various aspects of a programme can be evaluated, including the process of delivery, user satisfaction and impact. Here evaluation refers to the use of social research procedures to investigate systematically the effectiveness of programmes or services in terms of improving children’s health and development.

Evidence

Generally speaking evidence is information that acts in support of a conclusion, statement or belief. In children’s services this tends to be information indicating that the service works, i.e. is achieving the intended change in outcomes. We take a broader view in that evidence may support or challenge other aspects of service delivery, such as quality of implementation, reach and value for money.

Evidence-based programme

A discrete, organised package of practices or services – often accompanied by implementation manuals, training and technical support – that has been tested through rigorous experimental evaluation, comparing the outcomes of those receiving the service with those who do not, and found to be effective, i.e. it has a clear positive effect on child outcomes. In the Standards of Evidence developed by the Dartington Social Research Unit, used by Project Oracle, NESTA and others, this relates to ‘at least Level 3’ on the Standards.

Evidence-Confidence Framework

The Realising Ambition ‘Evidence-Confidence Framework’ is a tool that can be used to help judge the strength and overall balance of different types of evidence for a particular service being replicated, and to identify areas of development and opportunity. It is structured around a five-part definition of successful replication: (i) a tightly defined service; (ii) that is effectively and faithfully delivered to those that need it; (iii) evidence is used to learn and adapt, as required; (iv) there is confidence that outcomes have improved; and (v) the service is cost-beneficial and sustainable. A simple five-point colour grading system is used to grade the strength and quality of each type of evidence: the lightest blue representing the strongest evidence and the darkest blue the weakest.

Evidential tapestry

Replication requires a range of evidence, both to justify it and to maintain high quality delivery. For example, evidence of impact is important not only for understanding the outcomes of a service but also for justifying its replication in a new area. Alongside this sits evidence of the need and demand for the service in a local area. Evidence can also relate to delivery quality and fidelity to the model. Different types of evidence, varying in quality and utility, can answer a range of questions helpful to practitioners and managers delivering services for children and families. Put together, this range in depth, quality and breadth forms an ‘evidential tapestry’.

Experimental Evaluation / Robust Evidence of Impact

An evaluation that compares the outcomes of children and young people who receive a service to those of a control group of similar children and young people who do not. The control group may be identified by randomly allocating children and young people who meet the target group criteria – a randomised controlled trial (RCT) – or by identifying a comparable group of children and young people in receipt of a similar service – a quasi-experimental design (QED).

Exposure / Dosage

Refers to the “amount” of programme or service a person receives. This could be the number of total sessions attended, the length of those sessions, or how frequently they took place.

Feasibility study

Examines the practicality of an intervention with a view to refining it. It looks at the acceptability of, and engagement with, the intervention, as well as adherence in delivery and the viability of implementation.

Fidelity / Faithful delivery

The faithfulness to the original design and core components of a service. This can be assessed by fidelity monitoring tools, checklists or observations.

Fidelity monitoring tools

Typically, these are checklists or observations which enable practitioners, programme managers, or researchers to monitor whether or not a programme is being delivered faithfully, according to its original design.

Formative evaluation

An evaluation that takes place before or during the implementation of a programme or service to improve the quality of its design and delivery. This type of evaluation is useful for providing on-going information and feedback to staff, and can also be useful in observing changes that take place after adaptations or modifications to a programme have been made (see also summative evaluation).

Full unit cost

Full unit costs include not only direct costs but also indirect costs: those indirectly related to the service, typically as a result of the interactions between the service and other stakeholders, and borne by organisations other than the lead organisation.

Funder

Typically an organisation – foundation, charitable trust, or other philanthropic entity – that seeks to support social change through the funding of programmes, projects or services aimed at addressing “social problems”. Usually these organisations are focused on particular outcomes such as reducing inequality and homelessness, tackling the causes of gang violence, improving mental health support etc.

Impact

The impact (positive or negative) of a programme or service on relevant outcomes (ideally according to one or more robust impact evaluations).

Implementation

The process of putting a service into practice. Implementation science explores theory and evidence about how best to design and deliver effective services to people.

Implementation handbook

A document that describes the processes and agreements for replicating an intervention in a new context. Typically it would include information on the structure and content of the programme, its intended outcomes and the resources needed to deliver it.

Indirect unit cost

Costs indirectly related to the service, typically arising from interactions between the service and external stakeholders (for example schools), and borne by organisations other than the lead organisation, including the value of volunteers’ and beneficiaries’ time.

Intervention specificity

Relates to the design of an intervention and whether it is focused, practical, logical and based on the best available evidence.

Innovation

The process of translating a new idea into a service that creates value for the intended beneficiaries and which can be funded or commissioned.

Licensing

Usually involves being granted a licence to provide a service or sell a product, rather than an entire business format or system. The relationship between a licensing organisation and licensee is also looser than its franchising equivalent. This usually means a much smaller package of training and support (and not ongoing), and often no ongoing fees payable after the initial licence purchase. Moreover, licensees will usually not receive exclusive territorial rights, and the rights granted are usually more limited.

Logic model

A typically graphical depiction of the logical connections between the resources, activities, outputs and outcomes of a service. Ideally these connections will have some research underpinning them. Some logic models also include assumptions about the way the service will work.

Manual

A document that covers all the things about a programme or service that are relevant wherever and whenever it is being implemented. This includes the research base for the programme, the desired outcomes, the logical connection between activities and these outcomes, the target group and all of the relevant training or delivery materials (see also ‘Implementation handbook’).

Need

In relation to services for children and families, this refers to how many individuals in a specified population match the target group for the programme.

Outcomes

Outcomes refer to the ‘impact’ or change that is brought about, such as a change in behaviour or physical or mental health. In Realising Ambition all services seek to improve outcomes associated with a reduced likelihood of involvement in the criminal justice system.

Outcome monitoring tools

Within the context of services for children and their families, these are typically questionnaires, structured interviews, or observations completed by young people or their parents, practitioners or researchers on a range of indicators of emotional and physical well-being and development.

Participants

In the context of research, participants are individuals who agree (provide voluntary consent) to take part in a study. Participants should be distinguished from service users: in a trial, some but not necessarily all users of an intervention will be participants, and consenting individuals who do not receive the intervention because they are in the control group are also participants.

Pre-service intervention questionnaire

In the context of routine outcome monitoring or experimental evaluation, a baseline questionnaire completed shortly before any service provision takes place.

Post-service intervention questionnaire

In the context of routine outcome monitoring or experimental evaluation, a follow-up to baseline questionnaires completed shortly after the conclusion of service provision (further follow-ups may also be undertaken).

Promising service / intervention

A tightly defined service, underpinned by a strong logic model, that has some indicative – though not experimental – evidence of impact. In the Standards of Evidence developed by the Dartington Social Research Unit, used by Project Oracle, NESTA and others, this relates to ‘Level 2’ on the Standards.

Randomised Controlled Trial (RCT)

An evaluation that compares the outcomes of children and young people who receive a service to those of a control group of similar children and young people who do not. Within an RCT the control group is identified by randomly allocating children and young people who meet the target group criteria to either the service receipt or control groups.
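The random allocation at the heart of an RCT can be sketched in a few lines. The function below is an illustrative simplification assuming a simple 1:1 allocation; real trials use pre-registered allocation procedures, often with stratification and allocation concealment.

```python
import random

def randomise(eligible, seed=2024):
    """Allocate eligible participants 1:1 to intervention and control groups."""
    rng = random.Random(seed)        # fixed seed only for a reproducible sketch
    pool = list(eligible)
    rng.shuffle(pool)                # random order removes selection bias
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (intervention group, control group)
```

In practice allocation is usually handled by an independent statistician or trials unit so that those recruiting participants cannot foresee group assignment.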

Rapid cycle testing

An approach, widely used in healthcare innovation, that implements and then tests small changes in order to accelerate service improvement efforts. It builds upon and operationalises the ‘Plan > Do > Study > Act’ (PDSA) cycle. It promotes rapid iteration in order to support improvement and delivery at scale.

Realising Ambition Outcomes Framework

A measurement framework and set of associated tools designed to support delivery organisations to identify and measure the beneficiary outcomes most relevant to their work. The Realising Ambition framework comprises five broad outcome headings: (i) improved engagement with school and learning; (ii) improved behaviour; (iii) improved emotional well-being; (iv) stronger relationships; and (v) stronger communities. Under each of these five headings are a number of specific indicators – 31 in total. Each indicator is accompanied by a short standardised measure that may be completed by children and young people before and after service delivery.

Reliability

In the context of outcome measurement, the degree to which a standardised measure consistently measures what it sets out to measure.

Routine outcome monitoring

The routine measurement of all (or a sample) of beneficiary outcomes in order to: (i) test whether outcomes move in line with expectations; (ii) inform where adaptations may be required in order to maximise impact and fit the local delivery context; and (iii) form a baseline against which to test such adaptations.

Replication

Delivering a service into new geographical areas or to new or different audiences. Replication is distinct from scaling-up in that replication is just one way of scaling ‘wide’ – i.e. reaching a greater number of beneficiaries in new places. (See definition of ‘scale’).

Replication model

The approach to delivering a service into new geographical areas or to new or different audiences.

Social franchising

Where the owner of an intervention enters into a legal agreement with another person or organisation (the franchisee) which grants that franchisee a licence to use its systems, brand and other intellectual property, and to use those to operate on an identical basis in a particular area. The franchisor teaches the franchisee the entire business format, and provides support via training and communications to the franchisee for the duration of their business relationship. In return for these systems and services, the franchisee pays an initial fee and ongoing fees to the franchisor.

Standardised measure

A questionnaire or assessment tool that has been previously tested and found to be reliable and valid (i.e. consistently measures what it sets out to measure).

Standards of Evidence

The Standards of Evidence are a set of criteria by which to judge how tightly defined and ready for wider replication or implementation a particular service is. They also assess the strength and quality of any experimental evidence underpinning a service. The Standards form the basis of the Investing in Children ‘what works’ portal, which provides a database of proven services for commissioners of children’s services. The Standards have also underpinned numerous others, including the Project Oracle and NESTA Standards of Evidence.

Start-up costs

The total cost of setting up a project, programme or service in a new area. Start-up costs typically include capital costs such as IT equipment, planning and training costs, consultancy, recruitment, licensing and legal costs.

Surface adaptations

Aspects of the service that can be adapted to fit local contexts. These are peripheral components that do not directly alter the core aspects of the service that make it work. Surface adaptations may allow providers in other areas to make the service ‘their own’ and better serve the needs of local populations.

Scale

A service is ‘at scale’ when it is available to many, if not most, of the children and families for whom it is intended within a given jurisdiction. Usually this requires that it be embedded in a public service system. Service delivery organisations can scale ‘wide’ by reaching new places, or scale ‘deep’ by reaching more people that might benefit in a given place. Replication is one approach to scaling wide.

Service designer

Within the context of services for children and families, any individual or organisation responsible for conceiving, planning and constructing a service or programme aimed at preventing or ameliorating the difficulties or potential difficulties of children and families. Ideally service designers balance science and knowledge of ‘what works’ alongside expertise in user engagement and co-production.

Summative evaluation

An evaluation carried out typically at the end of a delivery cycle in order to establish the outcomes of a programme against its original objectives, how effective adaptations may have been, and to inform decisions around whether a programme should continue to be delivered or whether further adaptations should be made (see also ‘formative evaluation’).

Tightly defined service

Successful interventions are clear about what they are, what they aim to achieve and with whom, and how they aim to do it. A tightly defined service is one which is focused, practical and logical.

Unit costs

The cost of everything required to deliver a programme to a participant or a family. A unit cost is normally expressed as an average cost per child or family, but can also be expressed as a range (for example, unit costs ranging from “high need” to “low need” cases).

Universal service

A service or activity that is provided to all within a given population or location. There are no inclusion or exclusion criteria.

User engagement

A dimension of fidelity. This refers to the extent to which the children, parents or families receiving a programme are engaged by and involved in its activities and content. How consistently do participants stick with the programme? Do they attend? Do they like it? Do they get involved? Without high levels of user engagement, it is unlikely that programmes will achieve their desired impact.

User satisfaction

Refers to whether children and families in receipt of a particular service are satisfied with the delivery and outcomes of that service. Did they feel they received enough sessions and established a good relationship with practitioners? Did they feel the programme helped them deal with the difficulties they were facing, or prevented the occurrence of others? User satisfaction is typically captured upon completion of a service or programme.

Validity

In the context of outcome measurement, the degree to which a standardised questionnaire or tool measures what it sets out to measure (i.e. it does not inadvertently measure some related but spurious construct).

Value for money

The optimal use of resources to achieve intended outcomes. The National Audit Office typically use three criteria to judge value for money: ‘economy’ (minimising the cost of resources used or required – spending less); ‘efficiency’ (the relationship between the output from goods or services and the resources to produce them – spending well); and ‘effectiveness’ (the relationship between the intended and actual results of public spending – spending wisely).

Wholly owned

Involves a structure in which the organisation creates, owns, and operates the replicated service. This is sometimes referred to as a branch replication model.