
SCHOOL OF PHYSICS AND ASTRONOMY

FIRST YEAR LABORATORY

PX 1123 Introductory Practical Physics I

PX1223

Introductory Practical Physics II

Academic Year 2015 - 2016

NAME: Lab group:


Welcome to the 1st year laboratory, Introductory Practical Physics I & II, modules PX1123 in the Autumn semester and PX1223 in the Spring semester. You will need to bring this manual with you to every laboratory session, as it contains all the relevant information you will need for the laboratory classes. It is essential that you read carefully through the manual as it contains: the instructions that you will need to follow in order to undertake the individual experiments; logistical information; tips on how to keep your laboratory diary and how to write up your end-of-term reports; background notes on fundamental topics with which you need to be familiar; and health & safety issues that relate to the experiments themselves. You are expected to have pre-read each relevant section prior to coming to your weekly laboratory session and to have written your Risk Assessment and Aims. This manual is divided into 3 sections, described in more detail overleaf, and should be your first port of call for any information about the laboratory work. If you cannot find the information that you are looking for, please ask any member of the teaching team - your Lab Supervisor, the demonstrators or the module organizer (Prof. C. Tucker, room N0.38).

Lab Supervisor:
Contact email:
Demonstrators:


CONTENTS:

I: Introduction and logistics of the 1st Year laboratory 5

1. Organisation and administration of the laboratory 5

2. Laboratory diaries 8

3. Formal Reports 13

4. Safety in the Laboratory: Risk Assessment and Code of Practice 19

II: Experiments 21

Timetable and list of experiments 21
Check list for experiments 22
Laboratory notes for experiments 23 - 114

III: Background notes 115

III.1 Background notes to experiments 115
    Introduction to electronics experiments
    How to use a Vernier scale
    The oscilloscope
    The multimeter
III.2 Analysis of experimental data: Errors in Measurement 127
III.3 Use of Microsoft Word & Excel 2007 158
III.4 Reporting on experimental work 161
    An example of how to write a long report 166
    Checklists 174

I: INTRODUCTION AND LOGISTICS OF THE 1ST YEAR LABORATORY

1. ORGANISATION AND ADMINISTRATION OF THE LABORATORY

1.1 INTRODUCTION
There are 11 laboratory sessions in the Autumn Semester and 11 in the Spring Semester. They are designed with several objectives:

1. To provide familiarity and build confidence with a range of apparatus.
2. To provide training in how to perform experiments and teach you the techniques of scientific measurement.
3. To give you practice in recording your observations and communicating your findings to others.
4. To demonstrate theoretical ideas in physics which you will encounter in your lecture courses.
5. To understand the important role of experimental physics.

The majority of the work you will do in the laboratory will be experimental, and will be performed individually. However, there will be one or two sessions designed to give you practice in experimental technique, the handling of errors and the writing of formal reports, plus a small number of group experiments.

1.2 ATTENDANCE
Class Times. Labs run from 13:30 to 17:30 on Monday, Tuesday and Thursday afternoons. Students will be assigned one laboratory afternoon.

Attendance at Laboratories. Experimental physics forms an important part of all degree programmes offered by the School of Physics and Astronomy and is a requirement for Institute of Physics accreditation. Attendance at all scheduled laboratory classes is compulsory. Unscheduled absence from laboratories will lead to loss of marks and possibly failure of the module. It is not always possible to offer summer resits in laboratory-based modules (see UG Student Handbook, Appendix 1).

There are safety issues concerning work in laboratories. You will be given instruction in safe working practices and in Risk Assessment. It is a requirement of progression that students have undertaken safety training in Year 1. The PX1123 laboratory module is a required module and you will not be allowed to progress to the next year of study unless you pass this unit of study.

Registration. Attendance will be recorded. Students are expected to sign out of the laboratory if leaving before the end of the session.

In addition, lab demonstrators and/or supervisors will sign off your lab book at the time you leave the session, in order for us to assess how much work is required outside of contact hours.

1.3 SUBMISSION OF COURSEWORK
Laboratory modules are assessed 100% through continual assessment (in the form of lab diaries and formal reports). You will be informed at the start of the module when coursework will be distributed and what the submission dates (and times) are; these deadlines are final and late submission will be awarded zero marks without exception.

Coursework is submitted in two ways: either through the "post boxes" near the General Office or electronically through Learning Central. Major pieces of writing (e.g. your formal laboratory report) are submitted electronically to Turnitin, an electronic system which helps identify plagiarism.

When you submit¹ coursework there is an implicit agreement between you and the University that, unless stated to the contrary, any work you submit is exclusively your own work and that no part of the work has previously been submitted for assessment.

When submitting work electronically, you are advised to submit the work in good time in case of last-minute Internet or computing failures. You will not be able to submit work beyond the deadline set by the Module Organiser.

Requests for Extensions to Deadlines.
If circumstances are such that you will not be able to meet deadlines for submission of coursework or attend a scheduled laboratory class, you should submit documented extenuating circumstances requesting an extension. You should always try to make this application prior to the deadline (even if it means submitting documentary proof after the event). You are advised to read carefully the guidance notes on extenuating circumstances to ensure that your requests and documentary proof are likely to be accepted. Note that requests for extensions resulting from poor time-management, computing problems, or requests to attend sporting or cultural events or for holidays or travel etc. will not be accepted.

We are normally able to offer extensions of one week beyond the published deadline, but only rarely will longer extensions be granted. If you miss a scheduled laboratory session through legitimate extenuating circumstances, we can usually arrange for you to attend another session or additional sessions at the end of the semester. Requests for extensions submitted more than one week after the published deadline for coursework submission will normally be rejected and you will be awarded zero marks for that piece of work.

¹ "Submission" is defined as presenting work for assessment in any form, including paper-based written work, electronic documents or words/images used in oral presentations.

Avoidance of Plagiarism.
Plagiarism is the act of passing off the words or ideas of others as if they were your own. Advice on the avoidance of plagiarism is given in the UG Student Handbook (Appendix 2). There is also considerable help and advice on Learning Central and the University web site. You should be especially careful of plagiarism in computing tasks and you are advised not to share code through electronic means.

Resit opportunities in laboratory-based modules:
The following is extracted from Appendix 1 – Examining Board Rules and Conventions from the UG Student Handbook, which should be read in full.

1. Students are expected to attend all scheduled laboratory classes.

2. Subject to the attendance requirements stated in Clauses 3 and 4 below, students must have acquired a minimum module mark of 30 if they are to be offered summer resits. Failure to reach this threshold will require re-assessment of the whole module in the coming academic year. The resit module mark will be capped at 40%.

3. Students who fail to attend at least 80% of their scheduled laboratory classes will be deemed to have failed that module. In such cases, if a student has acquired a mark of 40% or greater, the Module Organiser will return a mark of 39% Fail. If the student has acquired a mark of less than 40%, the actual mark will be returned. Subject to the restriction of Clause 4 below, students who have failed laboratory-based modules will be offered a summer resit involving further practical or written work². In order to pass the module, any additional work undertaken must in itself be of a sufficient standard to be awarded a pass mark and the accumulated mark must raise the module mark to the pass threshold. The resit module mark will be capped at 40%.

4. If a student has failed to attend at least 60% of the scheduled laboratory classes, a summer resit will not be offered and the student will instead be required to be re-assessed in the whole module in the following academic year; this will carry the normal expectations of attendance and the module mark will be capped at 40%.

5. Students who miss a laboratory session but who provide valid extenuating circumstances (i.e. requests for extensions) will be given an opportunity to repeat the session either in another scheduled class or at the end of the teaching period.

6. Some of the School's laboratory-based modules are required modules and students will not progress under any circumstances with fails in these modules.

In short: attendance is compulsory; absence requires an extenuating circumstances certificate or zero will be recorded. Any student is expected to have attended and been assessed on a minimum of 8 out of 11 sessions in order to pass the module.

1.4 GEOGRAPHY AND MAINTENANCE OF THE LABORATORY
The main laboratory suite consists of room N1.34. In addition, there are two dark rooms which are used for optics experiments and for experiments using gases or radioactive material. The far end of the laboratory is set aside for tea-time refreshments. The laboratory is maintained by the technicians, Mr. Nic Tripp and Ms Nadia Aoudjane, from whom you can get your laboratory diary.

1.5 ORGANISATION AND SUPERVISION OF PRACTICAL WORK
The lecturer in charge of the teaching of your laboratory is the Lab Supervisor. In addition there will be 3 demonstrators who, between them, are familiar with all of the experiments you undertake. These people are there to help you and answer any questions associated with your experiment. In addition they will assess, mark and provide feedback on your work. Learn to use them – they're quite tame!

All observations made during an experiment should be entered in your laboratory diary (available from Mr. Nic Tripp, located in the room opposite the lab entrance). Each week you will be allocated an experiment and you will normally be expected to complete this, performing appropriate calculations, drawing graphs etc., by 17:30 hrs of that day. You will then be given until 16:00 hrs the following day to complete any analysis and draw conclusions on your work, ready for handing in. The hand-in deadline of 16:00 on the day following your laboratory session is hard and fast – otherwise a mark of zero will be recorded! Further details on the handing in of laboratory diaries will be given at the beginning of the session and are laid out below.

At the end of a lab session you are to have your lab diary signed out by a demonstrator. This will allow us to assess how much work you have achieved during the lab session, how much finishing-off work has been required and that you are employing the proper use of a lab diary.

It is essential that you put aside about half an hour before you come to the practical class in order to read through some of the experimental notes associated with the practical that you will be undertaking. It is anticipated that you should read any introductory section up to the experimental part itself. This will enable you to gain familiarity with the physics behind the experiment – you should not worry so much about any new lectured material, but refresh your understanding from A-level and school studies. Make sure you are clear about what is expected of you so you can plan your experiment, which will save you time on the day. You must also think about the safety considerations that are required for your experimental work and write a risk assessment, which will be signed off prior to commencing any practical work. Come to the laboratory with the risk assessment and the aims already written into your diary – 2/20 marks are set aside for the completion of this aspect.

1.6 ASSESSMENT OF PRACTICAL WORK
The responsibility for handing your work in at the correct time is yours, and failure to do so will usually mean that a mark of zero will be recorded. However, any completed work will be marked for your benefit and to provide you with feedback. Exceptions to this rule will be made only for extenuating circumstances, for which you have notified the School and for which the relevant form has been submitted.

In addition to your weekly lab-diary assessment, in each of the two semesters you will be required to write up one experiment in the form of a formal report. This will be allocated by your Lab Supervisor towards the end of each semester. Formal reports should NOT be written in your lab diary but word-processed on sheets of paper that are either bound or stapled, and submitted electronically through 'TurnItIn' on Learning Central. Marked reports will be returned to you, with feedback, and you should keep these as they should provide a basis for the reports you will have to write in subsequent years.

Each experiment and each report will be marked out of 20 in accordance with the following scheme:
16+ = exceptionally good, contains good physicists' reasoning;
14+ = very good, solid performance;
12+ = good performance which could be improved;
10+ = competent performance but with some key omissions;
8+ = bare pass;
7 or below = fail.

Your final module mark (see Undergraduate Handbook) will be made up as follows:
Formal report 33.3%
Experimental lab diaries 66.7%
(Please see sections 2 and 3 for more information regarding the Assessment Criteria used in marking.)

While the experimental notes of all experiments and reports will be assessed weekly and individual marks logged, your total mark will normally be obtained by expressing the total marks you obtain during the session as a percentage of the total which you could have obtained during the session. Exceptions for missed work will normally be made in cases of extenuating circumstances: absence due to illness for which a medical certificate has been supplied; absence for an extenuating, unavoidable reason for which you notified a member of staff; difficulty with an experiment for reasons which were not your responsibility and which you discussed with the demonstrator.

1.7 REFRESHMENT ARRANGEMENTS
Tea, coffee, squash and chocolate will be available in the laboratory about halfway through the afternoon and provide a mid-point break.
Tea and coffee: payment for these must be made at the beginning of the semester and will cover the whole semester. Prices will be announced at the first laboratory class.
Snacks/chocolate: payment individually at the time of purchase, but cheap.


2. LAB. BOOK / DIARY

2.1 RECORDING EXPERIMENTS IN YOUR LAB. BOOK / DIARY
AIM: to RECORD all the results of your work; details of the experimental set-up and best experimental practice; analysis of results related back to the underlying physics. It is a detailed notebook.

The aim of keeping a good laboratory diary is to record your work in a manner clear enough that you or a colleague could understand and attempt to repeat the experiment. It is a record of your observations, measurements and understanding of the experiment. It is not a neat essay containing the background theory or paragraphs copied from other sources, but a real-time account of your experimental method and findings. When assessing your laboratory write-up, the demonstrator is interested in your measurements, observations, thinking, results and conclusions. You should aim to present to him/her a set of measurements and results taken and recorded in such a way that they can understand easily what each number means, what results you have derived, and what conclusions you have drawn. You should also make notes of any difficulties experienced and sources of uncertainty or error. Ideally the record should be such that you could yourself reconstruct the course of the experiment later (perhaps 5 years later) without difficulty. The measurements presented to the demonstrator should be those taken during the performance of the experiment; they should not be rewritten before presentation.

A full written report of the historical background physics, purpose and extent of the experiment is not required with the experimental results; that task is performed once a semester, when you are asked to produce a full report for a single experiment only. A successful and quality record of experimental work is within the reach of all students, providing:

1) all the measurements needed, or which you think might be needed, are made at the time the experiment is performed;

Before you begin the collection of data, decide what you are going to do and how you are going to do it. To achieve this you need to have thought about the experiment before you begin it, to try out the apparatus and perhaps to have made some trial measurements.

2) the measurements are recorded clearly and completely;

A sketch of the apparatus, or of parts of the apparatus, labelled to correspond with the measurements, often helps, and serves as a very useful reminder of the experimental arrangement. You will find that the equipment you use has unique identification numbers; make a note of these in your lab diary as they will allow the teaching team to keep track of acceptable results and any systematic errors.

Make brief, succinct notes of what you have done, rather than long and detailed prose. Mention any specific problems and how you have overcome them. Mention good experimental practice.

Use bullet point comments rather than long prose.

Record measurements systematically and concisely and, whenever possible, tabulate them.

Always record first the actual measurements made and only then derive the values of other quantities from them, e.g. if you are measuring the distance between two points, record first the position of the two points against a scale, then subtract the readings and also record the result. This minimizes mistakes and allows you to check results at a later date.

Record units and remember that a statement of precision is an essential part of every measurement. A typical complete observation is (8.69 ± 0.01) mm.

Do not clutter the layout of measurements with arithmetic calculations - do these on a separate page or separate part of the page.

If during the experiment you make a mistake, neatly cross out the incorrect values and repeat them. NEVER rip out a page of a lab diary or completely obliterate sections (they may, on reflection later, have been right).

Whenever possible, plot graphs as the measurements are made – outlier/rogue data points can be identified readily, enabling repeat measurements to be made as required. Any trends in the data can also be identified – e.g. peaks, discontinuities etc. – in time for the experimenter to take more frequent/closely sampled readings to confirm the observed behaviour.

Label the axes of graphs. Choose scales for the axes which make plotting easy and, if possible, which allow the experimental precisions to be recorded sensibly. Axes do not have to start at the origin; “zoom in” sensibly to best display the results.

3) the results and conclusions are presented clearly. These in their turn will be achieved by attention to the following points.

Present the results with a statement of precision and units. Always check that the results that you have are sensible – are they “in the ball park” that you might expect? Make a sanity check - have you just predicted a speed quicker than the speed of light or a mass smaller than the lightest subatomic particle?

Quote the generally accepted value of the quantity you have measured, easily obtainable with a quick web search or from one of the standard books located in the laboratory. Try to account for any difference that you see. (Remember to note down where you got this 'accepted' value from.)

Comment briefly on the experiment and results, and discuss how you might extend and/or improve your experimental practice. This is important, as it demonstrates that you have both thought about and understood well what you have been doing. Note, however, that this is not the same as self-appraisal: "I think the experiment went really well" is subjective, unscientific and meaningless. Quantify your statements.

2.2 THE FEEDBACK YOU SHOULD EXPECT TO RECEIVE
You will receive feedback on each of your Lab Diary submissions on a weekly basis. This feedback will be in the form of a single mark out of 20, with additional written notes to guide you on things you didn't achieve and improvements you could consider. The demonstrators will return your work to you personally, giving a further opportunity for verbal feedback and for you to ask questions. At any time, you can ask the Lab Supervisor for justification of the mark awarded or where you could improve.

Bear in mind that a mark of 14/20 or better is a first-class degree performance, whilst one of less than 8/20 represents a fail. Your markers will base this mark on the Decile Level Descriptors provided in Table 1. These describe how well the required task must be performed in order to obtain a certain range of marks. This method is commonly used in University assessment where there is no model answer and independence is to be encouraged.

Note that this system considers two types of "lapses": "major" and "minor" (also considered in Table 2). It is worth paying attention to these, as marks of 70% or greater cannot be awarded in the presence of major lapses.

Your individual marks will be recorded on Learning Central for you to review. It is your responsibility to check that they have been recorded correctly and to contact the Module Organizer if that is not the case.

Formal reports are marked at the end of semester and a Report and Feedback Sheet (p.20) will be given back to you to give a thorough justification for the % mark received.

The expectation is then on YOU to read, understand and use the feedback received in order to improve your future performance.


Decile level descriptors

Table 1. The descriptors and descriptions used in assessing reports and diaries

Decile range   Descriptor   Level description

90-100% Outstanding The assessed work is as good as could reasonably be expected from a student at this level. It is uniformly excellent in meeting the task specifications. It contains no major lapses and very few (if any) minor lapses.

80-89% Excellent Work of very high quality, but not quite as good as could reasonably be expected from a student at this level. It is uniformly very good and sometimes excellent in meeting the task specifications. It contains no major lapses and few minor lapses.

70-79% Very good Taken as a whole the work is very good in meeting the task specification. It contains no major lapses but does contain a number of minor lapses.

60-69% Good Taken as a whole the work is good in meeting the task specifications. It may contain a small number of major and minor lapses, or no major lapses but significant minor lapses.

50-59% Satisfactory Satisfactory work taken as a whole. It is likely to show significant variability in meeting the task specifications. It is likely to contain a number of major and minor lapses.

40-49% Pass Adequate work taken as a whole. It is likely to have significant deficiencies in meeting the task specifications. It is likely that the work will reveal substantial gaps in understanding and have significant major and minor lapses.

30-39% Fail Insufficient relevant content, serious errors/omissions/lapses.

20-29% Insufficient Little relevant content, extensive errors/omissions/lapses.

10-19% Unsatisfactory Very little relevant content, extensive errors/omissions/lapses.

0-9% Poor Essentially no relevant content, extensive errors/omissions/lapses.

Some notes on the above are present on the following page.


Notes on decile level descriptors

Major Lapses The level descriptions above indicate that in order to award a mark greater than 70% (i.e. of 1st class standard) there should be no “major lapses”. Major lapses are therefore important in determining the mark awarded and are listed in Table 2.

Table 2 Common major lapses in diaries and reports

Diaries | Reports

From task description:
No, or highly inappropriate, risk assessment. | Significant deviation from the format and structure explained in the support available in Learning Central.
Content is illegible (neatness per se is not a requirement). | Report is not electronically generated.
Content cannot be easily followed or understood. |

Lapses that might be major depending on circumstances:
Lacking clarity and succinctness. | Obvious (e.g. numerical) mistakes in principle result(s).
Lacking in appropriate experimental observations. | Substantial gaps in understanding.
Lack of appropriate data analysis. | Lack of proper error consideration.
Lack of appropriate error analysis. | Lack of concluding remarks.

It should be appreciated that major lapses are, in general, not a restriction of marks over and above those defined in the task*. For example, a diary with “content that is illegible” will not represent a good record of the experiment performed.

The point of including the term in the descriptors is to help authors and markers in thinking about and checking reports and diaries.

*An exception applied to diaries: “No, or highly inappropriate, risk assessment” is considered a major lapse since safety is important. However, the presence of a risk assessment is not worth (up to) 30%.


3. FORMAL REPORTS OF EXPERIMENTS

Towards the end of the semester students are given a free choice of experiments that are suited to being written up in a scientific report. In order to allow time for report writing “free weeks” are scheduled close to the end of term. The laboratory is open during these weeks such that the laboratory supervisor can be consulted.

The formal report for PX1123 is due in at 4pm on the Friday of week 11; the formal report for PX1223 is due in at 4pm on the Monday of week 8 (11/4/16) – the first Monday following the Easter vacation.

The following sub-section contains (a lot of) information, help and advice on what is required and how to go about doing it.

3.1 THE ASSESSED TASK SPECIFICATION (FORMAL REPORT)

To write a basic scientific report on one of the experiments performed in the semester, following the format and structure explained in the support available in Learning Central. In particular, the report should be computer generated and must include: an appropriate line diagram of the apparatus; an appropriate number of equations; at least one graph; and a number of appropriate references (minimum: one each of a text book, the laboratory book and a web page). The report should not contain scanned images. Analysis is not expected to go beyond that indicated in the laboratory book, but a consideration of random and systematic errors is expected.

There is no strict word requirement or limit, but students are advised that ~2000 words is usually appropriate and are asked to quote their word count on the front page.

3.2 LEARNING CENTRAL SUPPORT ON SCIENTIFIC REPORT WRITING
Guidance and support for report writing, including MS Office skills, is given in Learning Central – PHYSX General Support module. This contains example reports, explanatory screencasts (videos) and documents used by undergraduates of all years.

The expected format, structure and required depth are very similar to those in the short example report. Students are strongly advised to use this as a template. However, do note that the short example report is based on an experiment with one part, whereas some of the experiments (e.g. A2, M5) have two parts. To see how to handle this, take a quick look at the structure of the long example report. This support is intended for years 0, 1 and 2; therefore students are advised to start with the overview and then, as a minimum, read the short report and watch the two screencasts (videos) "Basics-" and "More on Scientific reports". Watch the MS Office screencasts as required in order to produce the required line diagram (PowerPoint), equations (equation editor in Word) and graphs (Excel).

3.3 THE ASSESSMENT CRITERIA
The primary tool for assessing reports is the "Report Mark and Feedback Sheet" (reproduced in 3.6). This breaks down marks between different aspects of the report (some general, some specific content) in a way that reflects the learning outcomes for the module and the task described above. It also naturally provides feedback to students on the relative success of different aspects of their reports. As the construction of the form was based on the task specification, it can be used independently of it.

The marks awarded in the above form are guided by the “decile level descriptors” (given in Table 1, section 2.2). These are general (the same ones are used for judging diaries) and describe how well the required task must have been performed in order to obtain a certain range of marks. Therefore they must be used in conjunction with the task specification.

Note that this system considers two types of lapses: "major" and "minor" (see also section 2.2). It is worth paying attention to these as marks of 70% or greater cannot be awarded in the presence of major lapses.

3.4 THE PROCESS OF REPORT SUBMISSION AND ASSESSMENT

Reports are submitted via Turnitin whose plagiarism* checking is later supplemented by that of the markers.

Reports are assigned to the 3 lab supervisors to mark.

Markers read and annotate the scripts, fill in the “Report Mark and Feedback Sheet” and decide on marks for each section.

Markers then perform a reality check on the mark; again by comparing their view of the report against the decile level descriptors, before applying any necessary adjustments. This check is designed to pick up double awards/penalties that can occur when using mark sheets (due to the sections not being entirely independent).

When all reports have been marked the Module Organizer and all Lab Supervisors meet and moderate the marks: by comparing the averages for different markers and experiments and second marking a selection of reports.

* Check your student handbook for guidance: although data analysis can be done in pairs, it is advisable not to exchange reports once the writing process begins.

3.5 ADVICE ON REPORT WRITING
AIM: to PRESENT the results of your work.
The person marking your full report is interested in your description of the experiment. They are not concerned with the actual measurements or quality of the results, but are concerned with the way these are presented in the report. You should aim to present a clear, concise report of the experiment you have performed, at a level able to be understood by a fellow 1st Year student who does not have expert knowledge of your experiment. An example of a full report and further advice are given in section III.4.

Very importantly, your report must be original and not a copy of any part of the notes provided with the experiment. It should be a report of what you did; not of what you would like to have done or of what you think you should have done. That said, credit will be given for discussions on how one might extend and improve an experiment.

It is normal practice in writing scientific papers to omit all details of calculations, and you should also do this. Providing your report includes a statement of the basic theory which you used, including equations, together with a record of your experimental observations (summarized if appropriate) and the parameters which you obtain as a result of your calculations, it will be possible for anyone who so wishes to check the calculations you perform.

The principles of report writing are simple: give the report a sensible structure; write in proper, concise English; use the past tense passive voice, for example "... the potentiometer was balanced ...". The following structure is suggested. It is not mandatory, but you are strongly recommended to adopt it.

1) Follow the title with an abstract. Head this section “Abstract".

An abstract is a very brief (~50-100 words) synopsis of the experiment performed. An example is: "The speed of sound in a gas has been measured using the standing wave cavity method for one gas (air) for a range of temperatures near room temperature, and for gases of different molecular weights (air, argon, carbon dioxide) at room temperature. The speed in air near room temperature was found to be proportional to T½, where T is the gas temperature in kelvin, and the ratio Cp/Cv for air, argon and carbon dioxide at room temperature was found to be 1.402 ± 0.003, 1.668 ± 0.003 and 1.300 ± 0.003 respectively."

2) Follow the abstract, on a separate page, with an introduction to the experiment. Head this section "Introduction".

Here, you should state the purpose of the experiment, and outline the principles upon which it was based. This section is often the most difficult to write. On many occasions it is convenient to draft all the rest of the report and write this last. Remember that the reader will, in general, not be as familiar with the subject matter as the author. Start with a brief general survey of the particular area of physics under investigation before plunging into details of the work performed.

Important formulae and equations to be used later in the report can often, with advantage, be mentioned in the introduction as, by showing what quantities are to be measured, their presence helps in the understanding of the experiment. Formulae or equations should only be quoted at this stage. Derivations of formulae or equations should be given either by references to sources, for example text books, or in full in appendices. References should be given in the way described below. Remember (look at a text book) that parameters and variables in equations and in text are written in italics.

3) Follow this with a description of the experimental procedure. Head this “Experimental Procedure”.

Write the experimental procedure as concisely as possible: give only the essentials, but do mention any difficulties you experienced and how they were overcome. Often a well-formed diagram of apparatus can convey most of the information. It is often convenient to divide the description of the experimental procedure into sections, each one dealing with the measurement of one quantity. If the introduction to the experiment has been well designed this division will occur naturally. Relegate any matters which can be treated separately, such as proofs of formulae, to numbered appendices. Give references in the way described below.

All diagrams, graphs or figures should be labelled as figures. Give each a consecutive number (as in Figure 1 etc.), a brief title and, where possible, an explanatory caption. Give each group or table of measurements a number (as in Table 1 etc.) and ensure each one has a brief title. Use the numbers for reference from the text, e.g. "the data in Figure 1 exhibits a straight . . ." – if a figure isn't referred to within the main text, ask yourself why it is there.

4) Follow this section with the results of the experiment, discussion of them and comments. Head this “Results and discussion”.

The result of the experiment can be stated quite briefly as "The value of X obtained was N ± Δ(N) UNITS". For example, "The viscosity of water at 20°C was found to be (1.002 ± 0.001) × 10⁻³ N m⁻² s."

Discussion of the result, or of measurements, method etc. is important here. Think about the physics of the experiment and what has been proven or achieved – make use of cross-referencing by quoting the figure, table or report section numbers.

5) Follow this section with your conclusions. Head this “Conclusions”.

The conclusions should restate, concisely, what you have achieved, including the results and associated uncertainties. Indicate how you believe the experiment could be taken forward.

6) Follow this section with references. Head this “References” or “Bibliography”.

The last section of the main body of the report is the bibliography, or list of references. It is essential to provide references. There are two main styles used (along with many subtle variations) to detail references. In the Harvard method, the name of the first author along with the year of publication is inserted in the text, with full details given, in alphabetical order, at the end of the document. The second style, favoured here, is known as the Vancouver approach and is slightly different. At the point in your report at which you wish to make the reference, insert a number in square brackets, e.g. [1]. Numbers should start with [1] and be in the order in which they appear in the report. References should be given in the reference or bibliography section, and should be listed in the order in which they appear in the report.

Where referencing a book, give the author list, title, publisher, place published, year and, if relevant, page number, e.g. [1] H.D. Young, R.A. Freedman, University Physics, Pearson, San Francisco, 2004.

In the case of a journal paper, give the author list, title of article, journal title, volume number, page numbers and year, e.g. [2] M.S. Bigelow, N.N. Lepeshkin & R.W. Boyd, "Ultra-slow and superluminal light propagation in solids at room temperature", Journal of Physics: Condensed Matter, 16, pp.1321-1340, 2004.

In the case of a web page (note: use web pages carefully as information is sometimes incorrect), give the title, institution responsible, web address and, very importantly, the date on which the website was accessed, e.g. [3] "How Hearing Works", HowStuffWorks inc., http://science.howstuffworks.com/hearing.htm, accessed 13th July 2008.

7) Follow this section with any appendices. Head this "Appendices".

Use the appendices to treat matters of detail which are not essential to the main part of the report, but that help to clarify or expand on points made. Give each appendix a different number to help cross referencing from other parts of the report and note that to be useful appendices must be mentioned in the main body of the report. It is not expected that complete raw data sets are contained within the Appendices. Use them only when necessary.

Health Warning: In subsequent years it may be necessary to develop this standard report layout to deal with complex experiments or series of experiments.

3.6 THE FEEDBACK YOU SHOULD EXPECT TO RECEIVE ON YOUR REPORT

The following page shows the feedback sheet that will be given back to you with your marked reports – it also shows the mark scheme. Here the sections where markers provide feedback have been populated with selected advice/explanations. Do not take this to represent the required report structure.

Report Mark and Feedback Sheet 2015/16
Student Name / Report Title / Marker initials

Section Comments Mark

General formatting requirements

There are too many points to mention all here: consult report writing section and LC. Check quality of sectioning, figure headings, equations, grammar & spelling.

/15

Abstract

A stand-alone, single paragraph summary of the experiment: what you did, why you did it, the important results and what they mean. Like the aims/outline and conclusions this is written or re-visited towards the end of the writing process.

/15

Introduction

Background information and required theory – This should provide context and relevant theory (i.e. used later in analysing and understanding results). Aims/outline –Identify specific aims and outline how they were achieved. At this level the aims are probably to be found in the lab script. A common mistake is to make the aims too general (e.g. “to understand magnetism”). Aims are often refined at the end of the writing process.

/20

Experimental (apparatus, method, results, discussion)

Do split “experimental” into appropriate sections. Apparatus and methodology – This must include the important experimental parameters. General methodology can go with the apparatus; methodology specific to each experiment can go in with the individual experiment sections. Results/Analysis/Discussion. In appropriate experimental sections present and describe the results and discuss analysis. A final “discussion” section may be required to: bring information together (i.e. demonstrate synthesis); to compare and extract more meaning. Depending on the nature of the previous sections it may be long, short or not required at all. Don’t forget error discussion!

/40

Conclusions and references

Conclusions – A non-stand-alone summary of the experiment. Like the aims/outline and abstract this is either written towards the end of the writing process or re-visited. It should contain no new information. References – Minimum requirement: one each of text book; lab book and web reference.

/10

Overall assessment as a piece of scientific communication FINAL MARK

Assessed on how well the stated task has been executed as a whole, e.g. whether it is coherent. Includes a statement of the level in terms of the decile level descriptors.

Total (%)

4. SAFETY IN THE LABORATORY
The 1974 Health and Safety at Work Act places on all workers the legal obligation to guard themselves and others against hazards arising from their work. This Act applies to students and teachers in university laboratories. Maintaining a safe working environment in the laboratory is paramount. The following points supplement those contained in "School of Physics Safety Regulations for Undergraduates", a copy of which was given to you when you registered in the School.

1. It is your responsibility to ensure that at all times you work in such a way as to ensure your own safety and that of other persons in the laboratory.
2. The treatment of serious injuries must take precedence over all other action, including the containment or cleaning up of radioactive contamination.
3. None of the experiments in the laboratory is dangerous provided that normal practices are followed. However, particular care should be exercised in those experiments involving cryogenic fluids, lasers, gases and radioactive materials. Relevant safety information will be found in the scripts for these experiments.
4. If you are uncertain about any safety matter for any of the experiments, you MUST consult a demonstrator.
5. All accidents must be reported to a laboratory supervisor or technician, who will take the necessary action.
6. After an accident a report form, which can be obtained from the technician, must be completed and given to the laboratory supervisor.
7. Please alert your Laboratory Supervisor to any medical condition (e.g. having a pacemaker) which may affect your ability to perform certain experiments.

4.1 UNDERGRADUATE EXPERIMENT RISK ASSESSMENT
The experiments you will perform in the first year Physics Laboratory are relatively free of danger to health and safety. Nevertheless, an important element of your training in laboratory work will be to introduce you to the need to assess carefully any risks associated with a given experimental situation. As an aid towards this end, a sheet entitled Code of Practice for Teaching Laboratories follows. At the commencement of each experiment, you are asked to use the material on this sheet to arrive at a risk assessment of the experiment you are about to perform. A statement (which may, in some cases, be brief) of any risk(s) you perceive in the work should be recorded as an additional item in your laboratory diary account of the experiment.

4.2 SCHOOL OF PHYSICS & ASTRONOMY: CODE OF PRACTICE FOR TEACHING LABORATORIES

Electricity: Supplies to circuits using voltages greater than 25 V ac or 60 V dc should be "hardwired" via plugs and sockets. Supplies of 25 V ac, 60 V dc or less should be connected using 4 mm plugs and insulated leads, the only exceptions being "breadboards". It is forbidden to open 13 A plugs.

Chemicals: Before handling chemicals, the relevant Chemical Risk Assessment forms must be obtained and read carefully.

Radioactive Sources: Gloves must be worn and tweezers used when handling.

Lasers: Never look directly into a laser beam. Experiments should be arranged to minimise reflected beams.

X-Rays: The X-ray generators in the teaching laboratories are inherently safe, but the safety procedures given must be strictly followed.

Waste Disposal: "Sharps", i.e. hypodermic needles, broken glass and sharp metal pieces, should be put in the yellow containers provided. Photographic chemicals may be washed down the drain with plenty of water. Other chemicals should be given to the Technician or Demonstrator for disposal.

Liquid Nitrogen: Great care should be taken when using it, as contact with skin can cause "cold burns". Goggles and gloves must be worn when pouring.

Natural Gas: Only approved apparatus can be connected to the gas supplies, and these should be turned off when not in use.

Compressed Air: This can be dangerous if mishandled and should be used with care. Any flexible tubing connected must be secured to stop it moving when the supply is turned on.

Gas Cylinders: Must be properly secured by clamping to a bench or placed in cylinder stands. The correct regulators must be fitted.

Machines: When using machines, e.g. lathe and drill, eye protection must be worn and guards in place. Long hair and loose clothing, especially ties, should be secured so that they cannot be caught in rotating parts. Machines can only be used under supervision.

Hand Tools: Care should be taken when using tools, and hands kept away from the cutting edges.

Hot Plates: Can cause burns. The temperature should be checked before handling.

Ultrasonic Baths: Avoid direct bodily contact with the bath when in operation.

Vacuum Equipment: If glassware is evacuated, implosion guarding must be used in order to contain the glass in the event of an accident.

II: EXPERIMENTS

TIMETABLE AND LIST OF EXPERIMENTS

Autumn Semester (PX1123)
Week 1
  1. Introductory Exercises. Straight line graphs, errors and how to combine them. (p. 24)
Weeks 2-3
  2. Group Experiment: Young's Modulus. (p. 26)
  3. Group Experiment: Coefficients of Friction. (p. 28)
Weeks 4-9 (see list)
  4. Statistics of Experimental Data (Gaussian Distribution). (p. 32)
  5. Optics with Thin Lenses. (p. 37)
  6. Introduction to Multimeters and Oscilloscopes. (p. 45)
  7. Magnetic Fields and Electric Currents. (p. 55)
  8. Radioactivity. (p. 62)
  9. Rotational Motion and Moment of Inertia. (p. 68)
Week 10
  10. Group Christmas challenge! (p. 74)
Week 11
  11. Formal Report writing – no experiments. (p. 75)

Spring Semester (PX1223)
Week 1
  12. Report Writing, Feedback & Reflection session. (p. 76)
Weeks 2-7 (see list)
  13. Optical Diffraction. (p. 77)
  14. Propagation of Sound in Gases. (p. 81)
  15. Measuring e/m for the electron. (p. 84)
  16. Variation of Resistance with Temperature. (p. 88)
  17. Resistive and Reactive Impedances in RC Circuits. (p. 92)
  18. Microwaves. (p. xxx)
Week 8
  19. Group Easter Challenge! (p. 100)
Weeks 8-10 (see list)
  20. Air resistance. (p. 101)
  21. Computer simulations and analysis. (p. 109)
Week 11
  Formal Report writing – no experiments.


CHECKLIST BEFORE LAB SESSION

Read through the notes on the experiment that you will be doing BEFORE coming to the practical class. You will be expected to have read all the introductory notes and refreshed your knowledge of the subject as taught in school.

Think about the safety considerations that there might be associated with the practical, having read through the lab notes. Write a Risk Assessment before coming into the lab, to be discussed with your demonstrator at the start of the session.

Read carefully through any additional sections that might be useful in Section III (e.g. use of electronic equipment, statistics), and also the diary checklist given at the end of this manual.

Write an Aims statement before coming into the lab, so that you understand the basics of what you are about to perform.

DURING THE LAB SESSION

On turning up to the lab, listen carefully to any briefing that is given by your demonstrator: he/she will give you tips on how to do the experiment as well as detailing any safety considerations relevant to your experiment. Amend your risk assessment, if required.

Check that the sizes of any quantities that you have been asked to derive or calculate are sensible, i.e. are they the right order of magnitude?

Read through your account of your experiment before handing it in, checking that you have included errors/error calculations, that you are quoting numbers to the correct number of significant figures and that you have included units.

Staple/attach any loose paper (eg. graphs, computer print-outs, questionnaires etc.) into your lab book.

Exercise 1: Interpreting data and stating errors

1. A series of experimental results is given below. In each case the mean value of the experimentally determined variable is given, together with the error.

(a) R = 0.732 Ω, Δ(R) = 0.003 Ω
(b) C = 9.993 µF, Δ(C) = 0.018 µF
(c) T½ = 2.354 min, Δ(T½) = 11 sec
(d) R = 2.436 MΩ, Δ(R) = 23 Ω
(e) Wc = 11.562935 kHz, Δ(Wc) = 3.1 Hz
(f) d = 62165.551 m, Δ(d) = 26 cm
(g) f = 20 cm, Δ(f) = 0.03 cm

For each quantity, using SI units, write down the best final statement of the result of each experimental determination. Pay attention to orders of magnitude and numbers of decimal places. For example, if we measure a voltage of 5.56 V, but the error is 0.5 V, then the result is best quoted as V = (5.6 ± 0.5) V.

2. In the following questions the values of Z1, Z2 . . . are the given functions of the independently measured quantities A, B and C. Calculate the values of, and errors in, Z1, Z2 etc. from the given values of, and errors in, A, B and C. Then state the final result.

(a) Z1 = C/A          A = 100, Δ(A) = 0.1
(b) Z2 = A - B        B = 0.1, Δ(B) = 0.005
(c) Z3 = 2AB²/C       C = 50, Δ(C) = 2
(d) Z4 = B logₑC
(e) Z5 = A sin(C), where C above is expressed in degrees – think about how you express the error!

Refer to section III.2 for guidance on how to combine errors, using the method of partial differentiation. We will explain this to you, but check you understand – this is very important!
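As a reminder of the method referred to above (quoted here only in its general form; section III.2 gives the full treatment), the partial-differentiation rule for independently measured quantities is usually written as

\Delta Z = \sqrt{\left(\frac{\partial Z}{\partial A}\,\Delta A\right)^{2} + \left(\frac{\partial Z}{\partial B}\,\Delta B\right)^{2} + \left(\frac{\partial Z}{\partial C}\,\Delta C\right)^{2}}

so that, for instance, a simple quotient Z = X/Y (where X and Y are generic placeholders, not the quantities in the questions above) has fractional error \Delta Z/Z = \sqrt{(\Delta X/X)^{2} + (\Delta Y/Y)^{2}}.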

3. The variation of resistance, R, of a length of copper wire with temperature, T, is given by

R = R₀(1 + αT),

where R₀ and α are constants.

Experimental data from a particular investigation are given in Table 1.3.

T (K)    R (Ω)        T (K)    R (Ω)
300      2415         420      2820
320      2490         440      2910
340      2585         460      3050
360      2625         480      3030
380      2710         500      3115
400      2755         520      3155

Table 1.3: Data for question 3

a) Which are the dependent and independent variables?
b) Plot a graph to show the variation of R with T.
c) Determine R₀ and estimate the likely error.
d) Determine α and estimate the likely error.

4. In one 1st Year experiment, measurements are made of the velocity of sound in a gas, c. This can be related to γ, the ratio of the principal specific heats of the gas, by

c = √(γkT/m),

where m is the mass of one molecule of gas, k is the Boltzmann constant and T is the absolute temperature. Determine a value for γ (with error) from the following data, which were obtained from an experiment with nitrogen:

c = (344 ± 20) m s⁻¹;  T = (292 ± 1) K

Experiment 2: Measuring Young's Modulus

Note: This experiment is carried out in pairs.

Outline
Most students will be familiar with the concept of Young's modulus from A-level studies. It is an extremely important characteristic of a material and is the numerical evaluation of Hooke's Law, namely the ratio of stress to strain (a measure of resistance to elastic deformation). You will design a basic experiment to verify Hooke's law and determine Young's modulus for a bar of wood.

Experimental skills
Making and recording basic measurements of lengths and distances (and their uncertainties/errors).
Making use of repeated measurements to reduce the error.
Careful experimental observation and recording of results.

Wider Applications
Young's modulus, E, is a material property that describes its stiffness and is therefore one of the most important properties in engineering design.
Young's modulus is not always the same in all orientations of a material. Most metals and ceramics are isotropic, and their mechanical properties are the same in all orientations. However, anisotropy can be seen in some treated metals, many composite materials, wood and reinforced concrete. Engineers can use this directional phenomenon to their advantage in creating structures.
Young's modulus is the most common elastic modulus used, but there are other elastic moduli measured too, such as the bulk modulus and the shear modulus.

1. Introduction
The depression, y, produced at the end of a horizontal weightless rule by the application of a vertical force F, as represented in Figure 1.1, is given by equation [1]:

y = FL³ / (3EIₐ),    [1]

where L is the projecting length, E is Young's modulus for the material of the rule and Iₐ is the geometrical moment of inertia of the cross-section. For the rectangularly-sectioned rule provided, which has width a and thickness b,

Iₐ = ab³ / 12.    [2]

Figure 1.1: Representation of the deflection of a horizontal rule by a force, F.
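As an illustration only (the design of the measurement is left to you), equations [1] and [2] can be combined to express E in terms of measurable quantities:

E = \frac{F L^{3}}{3\,y\,I_{a}} = \frac{4 F L^{3}}{y\,a\,b^{3}}

so that, for example, a plot of the depression y against the applied force F at a fixed projecting length L would be expected to be a straight line of slope 4L^{3}/(E a b^{3}).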

2. Experiment
Clamp the metre rule to the bench so that part of its length projects horizontally beyond the bench edge. Make suitable measurements to explore the validity of equation [1] and to measure E for wood.

Reminder: Concluding remarks
Note: This reminder and the advice below are given since this is an early experiment - do not expect to see such prompts in the future.

Summarise the main numerical findings (as always with errors), important observations and what is understood and not understood at this time.

Experiment 3: Coefficients of Friction

Note: This experiment is carried out in pairs.

Outline
Most students are probably familiar with the mathematics of friction as applied to static and moving bodies on the flat and on slopes. In this experiment the behaviour of a real (if a little contrived) system, a short length of dowel travelling down a slope of variable angle, is investigated. Experience indicates that the system can behave unusually, requiring the experimentalist to take data reproducibly and to note down their observations carefully.

Experimental skills
Making and recording basic measurements: angles and times (and their errors).
Making use of trial/survey experiments.
Careful experimental observation and a systematic approach to data taking.

Wider Applications
Friction is a funny thing: sometimes you want it, sometimes you don't. The rotation of the wheels on a car should be as frictionless as possible, but friction between the tyres and the road is absolutely essential.
The difference between the coefficients of friction in the limiting and kinetic cases leads to "stick-slip" effects, where systems, once they start moving, move quickly, e.g. in hydraulic cylinders and earthquakes.

1. Introduction
The motion of a body down a slope is a classic mechanics problem. In elementary texts two types of system are considered: zero and non-zero friction. The friction between two surfaces is characterised by a dimensionless constant called the coefficient of friction, μ, and can often be related to the frictional force FF by

FF = μFN,    [1]

where FN is the normal or reaction force between the body and the surface. Two types are considered: limiting (or static) friction (μL), which prevents a static body from beginning to move, and kinetic friction (μK), which acts on moving bodies. Usually μK is thought to be slightly lower than μL, but near enough that they are considered equal in calculations. This is illustrated in Figure 1.2 for a body initially at rest on a surface and subject to a driving force that increases with time. The frictional force increases and matches the driving force until the limiting condition is met; then the body starts to move and the kinetic friction, which is slightly less than the limiting friction, operates, always in the opposite direction to that of the motion.


Figure 1.2. The frictional force acting on a body as the driving force is increased from zero.

1.1 Body on a slope

A body on a slope is an interesting system as there is no need to introduce external forces in order to observe the effects of friction. In the following discussion, the angle of the slope to the horizontal is given by θ, the mass by m and the acceleration due to gravity by g.

Figure 1.3. Forces acting on a body on a slope. The weight of the body can be resolved perpendicular and parallel to the slope. The perpendicular component is exactly balanced by a reaction force, FN.

As the angle of the slope increases the force on the body due to gravity acting down the slope, Fs increases as

FS = mg sinθ.  [2]

At the same time the reaction force decreases as

FN = mg cosθ.  [3]

This is important because, from equation 1, the reaction force determines the frictional forces.


The critical angle, θC

With no external forces acting the frictional force always acts up the slope and a critical angle, θc can be defined at which the forces down and up the slope are identical and beyond which the body starts to move down the slope. At the critical angle

mg sinθC = μL mg cosθC, or tanθC = μL.  [4]

Therefore a simple measurement of the angle at which the body starts to move reveals μL.
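(As a purely illustrative example of the arithmetic: if the dowel were found to start slipping at θC ≈ 21°, this would give μL = tan 21° ≈ 0.38.)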

Angles greater than the critical angle

Since in this regime the body is moving, it is the coefficient of kinetic friction that applies. Now there is an imbalance between the forces and the body accelerates; the overall acceleration, a, down the slope is given by:

a = g sinθ − μK g cosθ = g(sinθ − μK cosθ).  [5]

Since this acceleration is constant (in ideal conditions) the familiar equations of motion can be used. For example, the time, t, a body starting from rest takes to move down a slope of length, s, is given by

s = 0.5at².  [6]
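As an aside, equations [5] and [6] can be combined so that a single timed run gives μK: a = 2s/t² and hence μK = (g sinθ − a)/(g cosθ). A minimal Python sketch of this arithmetic, using purely illustrative numbers rather than real measurements:

import numpy as np

g = 9.81                  # m s^-2
theta = np.radians(25.0)  # slope angle - illustrative value
s = 0.80                  # distance travelled from rest, m - illustrative value
t = 1.3                   # measured time, s - illustrative value

a = 2.0 * s / t**2                                    # from equation [6], s = 0.5*a*t^2
mu_K = (g * np.sin(theta) - a) / (g * np.cos(theta))  # rearranged equation [5]
print(f"a = {a:.2f} m/s^2, mu_K = {mu_K:.2f}")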

2. Experiment

2.1 Apparatus
The simple apparatus used here consists of a channel, a stand to support it, a length of dowel and a stop watch. The arrangement of the support and channel should be as follows:
The support should be placed on the upper bench and the bottom of the channel on the lower bench.
The channel should be supported so that it is "L" shaped, with a slight angle so that the dowel remains close to the upright. (A "V" shaped arrangement should not be used as it has been found that the dowel becomes easily wedged.)
Running the forks on the support through the holes in the channel ~30 cm from the top of the channel seems a secure, stable and convenient method.

Note: The maximum angle of the slope permitted in this experiment is 30°.

2.2 Part 1. Survey/trial experiments (including timing errors)

Survey (or trial) experiments are a vital part of performing any new procedure; they are used to get a feel for the behaviour of the system, to determine the most appropriate methodology, to understand the important measuring ranges etc. In many first year experiments, these trials are hidden from the students, in order to make best use of the available time and apparatus. Nonetheless they will have been carried out by demonstrators and supervisors in order to generate the lab scripts.

Therefore, this part of the experiment is being used as an opportunity to take students through the surveying process. So, spend ~10 minutes "playing" with the equipment and making a note of your observations and some measurements if appropriate. Pick suitable conditions to perform a study of the reproducibility of "your" timing.

Note that this is not as easy as it sounds since an aim is to be able to later distinguish between your timing error and real variations within the experiment.


2.3 Part 2. Determine the coefficient of limiting friction, μL.
Use the experience you have gained to design and perform an experiment to determine μL.

Your diary entry will need to describe your methodology and how the error was determined and what you think it corresponds to.

2.4 Part 3. Determine the coefficient of kinetic friction, μK.
Use the experience you have gained to design and perform experiments to determine μK, exploring angles between θC and 30°. There are no obvious straight-line graphs here; instead it is suggested that a graph of μK against angle is plotted.

Reminder: Concluding remarks Note: This reminder and the advice below are given since this is an early experiment - do not expect to see such prompts in the future.

Summarise the main numerical findings (as always with errors), important observations and what is understood and not understood at this time.


Experiment 4: The statistics of experimental data; the Gaussian distribution. Outline The statistical nature of measured data is examined using an experiment in which ball bearings are randomly deflected as they roll down an incline. Random behaviour is expected to result in a “Gaussian” distribution, the most common mathematical distribution in experimental physics. The experiment dwells on the progression from small to large data sets, the emergence of the well known shape of the distribution and the implications for data analysis and error estimation (i.e. the relationship to “accuracy and precision” and “random and systematic errors”).

Experimental skills Statistical analysis of data in general. Analysis using the Gaussian distribution in particular.

Wider Applications
This experiment illustrates the unseen statistics behind all practical physics:
When dealing with a small number (say ~12) of data points, as you often do in these laboratory experiments, it should always be remembered that the measurements represent "samples" of an underlying data "distribution".

The majority of physics experiments result in underlying data distributions that are Gaussian.

Other important distributions include Poisson, Lorentzian and Binomial. The distribution is governed by the underlying physics and/or statistics.

1. Introduction
Virtually all experiments are influenced by statistical considerations and have underlying distributions of various types. However, in most cases either not enough data is collected or the data is not analysed in such a way as to reveal this fact. Consequently it is entirely possible to perform crude but quite reasonable data analysis with little understanding of its context. Clearly the training of physicists should progress them beyond such a superficial level. This experiment plays a very important role in that training by taking you through the techniques used when dealing with small, medium and large sets of data.

The experimental set up chosen uses random processes to produce a distribution that consequently should be Gaussian and is appropriate here since most experiments produce such distributions. What is rare is the opportunity for students to observe the emergence of a distribution and consider the effect on data and error analysis.

Ultimately though, always remember that the concern of an experiment is to express a measurement as "(value +/- error) units". Statistics is simply the tool by which the "value" and the "error" are determined.

Reminder:
Systematic errors - the result of a defect in either the apparatus or the experimental procedure, leading to a (usually) constant error throughout a set of readings.
Random errors - the result of a lack of consistency in either the apparatus or the experimental procedure, leading to a distribution of results.
Accuracy - determined by how close the measured value is to the true value, in other words how correct the measurement is. A value can only be accurate if the systematic error is small.
Precision - determined by how "exactly" a measurement can be made regardless of its accuracy. Precision relates directly to the random error - a value can only be precise if the random error is small (high precision means low random error, low precision means high random error).

1.1. Simple statistical concepts
In all the experiments a series of values x1, x2 .... xn is obtained. Often the experimental values differ, mainly due to the fact that some variable in the experiment has been changed (usually the aim would then be to plot the data on a straight line graph). In this discussion and the experiments that follow, the measurements recorded will be of nominally the same value. The actual measurements will represent a sample of all the possible measurements, and the differences between them are due to variations in the system being measured, the equipment used for measuring, or the operator.

From such measurements (taking xi as the ith value of x and n as the total number of measurements) a number of statistical values can be found that are of relevance to the understanding of the experiment:

Arithmetic mean:   μ = (1/n) Σ(i=1 to n) xi  [1]

The arithmetic mean has a special significance as this represents the best estimate of the “true value” of the measurement. The error in an experiment can then be understood to reflect the possible discrepancy between the arithmetic mean and the true value. Superficially and practically for small n an estimate of (twice) the error might involve:

Data range: xmax - xmin
Probable error: the range in which 50% of the values fall

With larger n (a larger sample) formal statistical terms such as "standard deviation" become appropriate. The standard deviation, σ(x), of an experiment is a value that reflects the inherent dispersion or spread of the data (an experiment with high precision will have a low standard deviation) and so is, like the "true value", an unattainable idealised parameter. Practically, the available sample can be used to obtain a "sample standard deviation", σn(x) (the equivalent of finding the arithmetic mean of the measurements), and this can be modified to give the "best estimate of the standard deviation", sn(x):

sample standard deviation:   σn(x) = [ (1/n) Σ(i=1 to n) (xi - μ)² ]^(1/2)  [2]

best estimate of the standard deviation:   sn(x) = [ n/(n-1) ]^(1/2) σn(x)  [3]

Whilst standard deviations are related to errors and may be reasonable to use in some circumstances they are not appropriate when there are a large number of measurements and the distribution is well defined (see below for more on distributions). Here the accepted error is the (best estimate of the) standard error:

Best estimate of standard error:   sn(x)/n^(1/2) = σn(x)/(n-1)^(1/2)  [4]

Note: All of the above values can be found without reference to the particular distribution of the data.
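As an aside, these quantities are straightforward to compute on a computer as well as on a calculator; a minimal Python sketch (the twelve bin values are invented, standing in for real data) might be:

import numpy as np

x = np.array([-2, 0, 1, -1, 0, 2, 0, -1, 1, 0, -3, 1])  # illustrative bin values
n = len(x)

mean = x.mean()                # arithmetic mean, equation [1]
sigma_n = x.std(ddof=0)        # sample standard deviation, equation [2]
s_n = x.std(ddof=1)            # best estimate of the standard deviation, equation [3]
std_err = s_n / np.sqrt(n)     # best estimate of the standard error, equation [4]
print(f"mean = {mean:.2f}, sigma_n = {sigma_n:.2f}, s_n = {s_n:.2f}, standard error = {std_err:.2f}")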


1.2. Distributions
If measurements occur in discrete values (as they will in the following experiments) the distribution can be drawn by plotting the number of times (frequency) a value is recorded versus the value itself. (If the measurements are continuous then the values can be split up into data ranges (e.g. x to x + dx) and then the frequency counted.)

However, the frequency of occurrence clearly depends on the number of attempts which are made. A more fundamental property is the probability, which experimentally is given by

probability, P = (number of occurrences) / (total number of events, n)  [5]

It should be clear from this that the sums of probabilities should equal one. The mathematical functions that describe distributions are always probability functions.

1.3 The Gaussian (or Normal) distribution
All experimental results are affected by random errors. In practice it turns out that in many cases the distribution function which best describes these random errors is the Gaussian distribution given by:

P(x) = [ 1 / (σ √(2π)) ] exp[ -(x - μ)² / (2σ²) ]  [6]

where μ is the mean value of x and σ is the standard deviation. An example of a Gaussian distribution is shown in figure 1; it is symmetrical about the mean, has a characteristic bell shape, and ~68% of the measured values are expected within ±1σ of the mean (this range is slightly larger than that covered by the "probable error").

Figure 1. Gaussian probability function generated using a mean of zero and σ(x) = 1, resulting in the x-axis being in units of standard deviation (the y-axis shows P(x) from 0 to 0.45, the x-axis runs from -4 to +4, and the FWHM is marked). The FWHM is wider than 2σ(x).


2. Experimental
2.1 Apparatus
The apparatus used here consists of a pin board, down which steel balls are rolled individually (so that they do not interfere with each other). There is a row of 23 "bins" at the base numbered from -11 through 0 to +11 (the discrete values representing the results of this experiment).

The pins are intended to induce a random motion of the balls so that the balls have a distribution about their “true value” that is Gaussian.

The design is such that the true value (ideal result) of the experiment is zero. However, various biases can be imagined that might affect this and lead to a systematic error (overall bias) that will be constant provided the equipment is not disturbed.

Approximately 50 balls are supplied and these constitute a “batch”.

2.2 Procedure
Although split into two parts this should be considered as a single continuous experiment in which the number of trials, n, increases. In order to be able to monitor the "result", and the emerging Gaussian distribution, it is necessary to keep track of the results in the order in which they are obtained. It would be impractical to note the result in order for every ball (trial); however, it is really only necessary to pay close attention to the first few trials. The first part of the experiment pays close attention to the "first batch" of ~50 trials. In the second part a further 4 batches are recorded, allowing the accumulation of a large data set. The total number of trials is then ~250.

2.2.1 Small-medium number statistics (n = 1 to ~50) Note: In order to mimic the low n experiments that students usually perform the first batch must be undertaken in stages; this ensures that unprejudiced decisions about errors are made at each stage. Note: it will be very easy for diaries to become unintelligible whilst working through this section - use headings, notes and comments to avoid this.

(i) First roll one ball down the slope and note its position. Clearly this "measurement" is our current best estimate of the "true value". What is the "result" of the experiment at this stage (i.e. value +/- error)? Is it in fact possible to estimate an error (note - it must be non-zero) at this stage? If it is not possible then what are the implications for deciding on the size of the error bars that are often drawn on graphs based on single measurements?

(ii) Roll another two balls down the slope (total = 3) and note their positions. The best estimate of the "true value" is now the average of three measurements (relevance: e.g. timing experiments are often performed three times). Realistically the estimated error here is obtained from the data range. Write down the result of the experiment at this stage (value +/- error).

Remember each trial should be performed identically - you should be aware of and write down the details of the procedure at this point. It would be entirely reasonable to change (improve) the methodology. This would entail repeating the first three trials (for consistency later) and the diary entry should be clear.

(iii) Roll a further nine balls down the slope (total = 12) and note their positions

The best estimate of the “true value” is now the average/mean of a total of twelve measurements (relevance: experiments in which straight line graphs are generated often have approximately this number of data points).


The estimated error. With 12 measurements simply using the data range to obtain an error value ought to be too pessimistic and statistical techniques can start to be used (even though there are not enough data values for the shape of the distribution to have emerged). Calculate and compare values for (0.5 x) range, the probable error, standard deviations and standard error described above.

(Note: the above calculations can be performed using the statistical functions of a calculator. This will save time later, but at this point students must confirm that the correct method is being used by showing hand working and comparing with calculator + statistical functions).

(iv) Roll the remainder of the batch down the slope and note their positions in order.
For totals of 24 and ~50 trials calculate and compare values for (0.5 ×) range, the probable error, the standard deviations and the standard error.
Use the values for n = 50 to draw a histogram and compare with the shape of the Gaussian distribution shown in figure 1. How well defined is the Gaussian distribution?

2.2.2 Large number statistics (n up to ~250) In order to be able to monitor the further development of the experimental “result” and the data distribution a further 4 batches of balls will be used. It would be impractical to note the result in order for every ball (trial), instead send the balls down in batches (of ~50) recording the distribution for each batch. Draw a suitable table in which to record the measurements. Perform and record the measurements.

Data distribution
Draw a second table in which to record the calculated cumulative distributions for the totals of 1 (from section 2.2.1), 3 and 5 batches of measurements.
For each case calculate the mean, the sample/best estimate of the standard deviation and the standard error.
Use the values for n ~ 250 and equation 6 to calculate the corresponding Gaussian distribution and plot this on top of the measured distribution. Comment on the agreement between them.
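If you choose to produce this comparison plot on a computer, the idea is simply to scale equation [6] by the total number of trials (each bin is one unit wide, so the expected count in a bin is approximately n × P(x)). A hedged Python sketch, in which randomly generated numbers stand in for the ~250 recorded bin values:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = np.clip(np.rint(rng.normal(0.3, 2.5, 250)), -11, 11)  # stand-in for your recorded bins

bins = np.arange(-11, 12)                              # bin centres, -11 to +11
counts = np.array([np.sum(data == b) for b in bins])   # measured distribution

mu = data.mean()
sigma = data.std(ddof=1)
gauss = len(data) * np.exp(-(bins - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

plt.bar(bins, counts, label="measured distribution")
plt.plot(bins, gauss, "r-", label="Gaussian, equation [6]")
plt.xlabel("bin"); plt.ylabel("frequency"); plt.legend(); plt.show()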

2.3 Analysis of the "result" of the experiment as a function of n
This section considers all of the results obtained.
Consider (giving an explanation/justification) what is the most appropriate error value to use for n = 3, 12, 24, 50, 150 and 250. One decision here is: at what n does it become appropriate to use the standard error?
Summarise the above in a table with columns for "value", "most appropriate error value" and "error type" (e.g. range, standard error etc.).
Plot a graph(s) of mean value, μ, against n (for n = 3, 12, 24, 50, 150 and 250) using the chosen error for the error bar.

Finally, for the concluding remarks and drawing on the previous graph, summarise what has been learnt about the systematic and random errors and the accuracy and precision of the experiment as n was increased. Is there any evidence for a bias (systematic error) in the experimental set up? (Note: Just in case you've missed it so far - the mean value alone provides no evidence for a bias (systematic error); it must be considered together with an appropriate error.)


Experiment 5: Geometric optics, imaging with thin convex lenses

Safety
The light source used is a relatively low power 40 W incandescent bulb. However, in using lenses the light may be focused to produce high power densities with the potential to damage the eye. Therefore never look through lenses towards the light source.
The light bulb is contained and shielded within a black housing which will become hot after extended use. Therefore take care not to touch the housing.
The lenses are made from glass and may break if dropped. If this occurs do not attempt to clean up; instead call the demonstrator, supervisors or lab technician.

1. Simple Overview

This is a simple experiment designed to familiarise you with basic optical equipment and a common-sense approach to setting up optical systems. You will learn about some basic properties of thin bi-convex spherical glass lenses, the key property of which is the focal length of the lens. If parallel wave-fronts of light are incident on a thin lens, to a first approximation the light is focussed by the lens to a point; the distance from the lens to this point is known as the focal length (f). Conversely, by symmetry, if a point source of light is placed at the focal point, the lens renders the emerging beam parallel. This is known as collimation/collimating. This is sketched in figure 1. Parallel wavefronts can be approximated by light from a great distance (for example light from the sun, or even a very distant light source). Point sources can be simulated by small pin pricks in screens with lights behind them.

Figure 1. Simple ray trace view of the focal point of a lens.

The experiment makes use of an optical track that allows for the precise positioning and fixing of optical components. This is essential for many optical experiments and instruments, where the alignment of optical components can be critical. Experiments in optics are different from most other types. This is due to the fact that an optical beam is required to pass through or interact with a number of optical components that consequently need to be carefully aligned. This is a skill that benefits from patience and practice. This experiment provides a (relatively forgiving) introduction. As with any optics experiment, avoid touching the optical surfaces as much as possible. A simple tip to remember is to constantly look at the alignment of the lenses along the track. They should be broadly in a straight line and the same height. If they are not (heavily staggered, or up and down like a roller coaster) then your light path is equally doglegged through the lenses, and in the extreme case you may even be picking up light from some other (stray) source. This is not good, and probably means your first lens is pointing the light significantly off the axis of the track.


Simple optics form the basis of cameras, microscopes, telescopes and the eye. The techniques used are ubiquitous in scientific experiments, particularly in spectroscopy and imaging (e.g. microscopes, telescopes etc).

Apparatus
1.5 m optical bench with Vernier scale, 40 W shielded incandescent light source, various optical holders, lenses, filters, plates and screens.

2. Experiments

Reminder: Take care when handling optical components: The lenses are made from glass and may break if dropped. If this occurs do not attempt to clean up, instead call the demonstrator, supervisors or lab technician. In addition hold lenses at their edges and above the benches when mounting into their holders.

Experiment 2.1 Collimated beams (and determination of focal length)

This section considers collimated light i.e. light whose rays are all parallel to the principal axis. When such light (shown in figure 1) is incident on a converging lens it all passes through the principal focus on the opposite side of the lens. Likewise rays emanating from a principal focus emerge parallel to the principal axis (or collimated) from the lens. These rays are central to understanding optical systems through ray diagrams. Collimated beams, formed by placing objects at the focus of a lens, are often exploited in optical instruments such as spectrometers.

“Auto-collimation”

The properties of collimated beams described above form the basis of a rapid method for finding the focal length of a lens (this experiment) and for producing a collimated beam of light (the next experiment).

Place a pinhole (which will act as a point source of light or the ‘object’ in figure 2) about 10-20cm from the lamp with its black side facing the lamp.

Mount a flat mirror about 50 cm away, with lens 1 between the pinhole and the mirror.

The principle of the approach here is illustrated in Figure 2. The mirror reflects light back into the lens and towards the pinhole. A sharply-focused image is produced immediately alongside the pinhole only when the beam between the lens and the mirror is parallel and the object distance is equal to the focal length. (Obviously if the pinhole is exactly centred the image is formed coincident with the pinhole and you won’t see it – move the pinhole a bit to check this hasn’t happened by chance).

Figure 2 Focal length determination by “auto-collimation”

Adjust the position of the lens in order to obtain a sharply focused image of the pinhole next to the actual pinhole.

Find the focal length of lens 1.


Experiment 2.2 Measurements with a collimated beam

Remove the mirror and instead after lens 1 place a second lens holder and then a screen. With no lens in the second holder it is likely that a number of images of the pinhole will appear on the screen - this is a consequence of a combination of the light source that consists of an extended and non-uniform filament and the larger hole now being used. However, the light may still be considered to be collimated (the separation of the images should not change as the screen is moved although the size of each image will).

Place lens 2 in the holder and move the screen in order to determine its focal length, f. To convince yourself that the light is collimated and the separation between the two lenses does not matter, repeat this for the second lens at positions of 60cm and 90cm on the optical bench (f should not change).

Repeat for lens 3.

Experiment 2.3 Radius of curvature of a lens (+ determination of refractive index)

There are a wide variety of experiments that can be performed to examine the properties of lenses. The following (slightly quirky) example is included since it is a convenient way of determining the radius of curvature of convex lenses and knowledge of this value allows the refractive index of the material used to be determined.

The principle of the measurement is shown in figure 3. A source S of light (a pinhole again) transmits light onto a lens. However, although most light is transmitted some is reflected (for an air/glass boundary ~5% can be reflected), enough to form a visible "return" image alongside the source (see background information).

The condition for forming a return image (shown in figure 3) is a separation, u, between source and lens such that, following refraction at the first (left hand side) air/glass boundary, the light rays are incident normally (perpendicular) on the second (glass/air) boundary. Then at the same time (i) the main, transmitted part of the beam forms a virtual image at C and (ii) the reflected beam retraces its path back to and forms an image at the source.

Although use is made of the reflection, calculations are based on the formation of a virtual image (i.e. light refracted through both interfaces). Since a virtual image is formed at C, the sign convention dictates that v is negative; however, C is at the centre of curvature for the r.h.s. boundary and the magnitude of v is the radius of curvature (for a thin lens).

Figure 3: Condition for forming a reflected image at the source (light rays are normally incident on second boundary and retrace their path back to the source). Under these conditions (and for a thin lens) the virtual image is at the centre of curvature of the rhs boundary.


Perform the following for all three lenses:
Place the pinhole (acting as source S) a suitable distance from the lamp.
With the mirror removed position the lens to obtain a "return" image of the pinhole close to the pinhole.
Measure u and calculate the virtual image distance v using equation 4 found in the background information at the end of the text (remember that v is negative).
Find the radius of curvature of the other surface of the lens in a similar way.
Use the fact that v is equal in magnitude to the radius of curvature of the appropriate surface of the lens to calculate the refractive index of the lens material.
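If it helps to see the arithmetic laid out, the following minimal Python sketch goes from the two measured source-lens separations (one for each orientation of the lens) to the refractive index; the focal length and distances used are illustrative assumptions, not real data:

# Illustrative values only
f = 0.20      # focal length from experiment 2.2, m
u1 = 0.100    # source-lens separation giving a return image, first orientation, m
u2 = 0.105    # the same with the lens reversed, m

def radius_from_return_image(u, f):
    # Thin lens equation [4]: 1/u + 1/v = 1/f; v is negative (virtual image at C)
    # and its magnitude equals the radius of curvature of the far surface.
    v = 1.0 / (1.0 / f - 1.0 / u)
    return abs(v)

r1 = radius_from_return_image(u1, f)
r2 = radius_from_return_image(u2, f)

# Lens maker's equation [5]: 1/f = (n - 1)(1/r1 - 1/r2); with r2 negative under the
# sign convention, the magnitudes of the two curvatures simply add.
n = 1.0 + 1.0 / (f * (1.0 / r1 + 1.0 / r2))
print(f"r1 = {r1:.3f} m, r2 = {r2:.3f} m, refractive index n = {n:.2f}")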

Experiment 2.4 Image formation (and determination of focal length)

This experiment examines the conditions for producing and the nature of an image of an object (a cross hair on a screen) through a single bi-convex, thin, spherical glass lens.

First measure the dimensions of the cross-hair on the clear slide (the horizontal will be used to calculate the magnification of images produced).

Accurately position the lamp at 0 cm and the clear slide with cross hair at 20 cm (this is close enough for a reasonable throughput of light whilst avoiding images of the filament in the bulb).

Next position the screen at 110 cm (separation to slide = 110 - 20 = 90 cm) and lens 1 in its holder between the slide and the screen.

Move the position of lens 1 and find the two positions at which an image of the cross hair is clearly focused on the screen. Note the nature of the image compared to the object. This is tricky. There really are two distances that produce images. Be patient and work carefully to find the two positions.

Adjust the vertical position of the lens and the lateral position of the slide and lens so that the image is roughly in the centre of the screen for both positions (to roughly align the system).

For screen positions starting at 110 cm and decreased in 5 cm steps find the two focusing positions for the lens and the vertical height of the image (with errors) noting your values in a suitable table. Finish the sequence by using smaller steps to find the minimum slide/screen separation for which a well focused image is possible.

Plot a graph of 1/u versus 1/v and use the intercepts to determine the focal length of the lens, f. What is the value of the gradient and is it as you would expect?

Compare the v/u and y/x values obtained, and comment on the conditions at the minimum slide/screen separation (for example compare u, v and f and consider the magnification).
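If you plot and fit the graph on a computer, a minimal Python sketch (the object and image distances are illustrative placeholders, not real data) might be:

import numpy as np

u = np.array([0.25, 0.30, 0.35, 0.45, 0.60])   # object distances, m - illustrative
v = np.array([0.38, 0.30, 0.26, 0.23, 0.20])   # image distances, m - illustrative

# Thin lens equation: 1/v = 1/f - 1/u, so 1/v against 1/u should be a straight
# line of gradient -1 with both intercepts equal to 1/f.
gradient, intercept = np.polyfit(1.0 / u, 1.0 / v, 1)
f = 1.0 / intercept
print(f"gradient = {gradient:.2f} (expect ~ -1), focal length f = {f:.3f} m")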

3. Background Information

3.1 Geometric optics Geometric optics (or ray optics) considers the propagation of light in terms of a single line or narrow beam of light, through different media. It is a very useful way to consider optical systems especially when imaging is involved.

Geometric optics is based on the consideration that light rays: propagate in a rectilinear (straight-line) path in homogeneous (uniform) medium


change direction and/or may split in two (through refraction and reflection) at the interface or boundary with a dissimilar medium (only two media are considered here: glass and air).

Although powerful in understanding the geometric aspects of optical systems, such as imaging and aberrations (faults in images) it does not account for effects such as diffraction and interference.

3.2 The interface between two media: refractive index and Snell's law
The two media of concern here are air and glass and the parameter that characterizes their optical property as far as geometric optics (and lenses) is concerned is their refractive index, n. Refractive index, n, relates to the speed of light in media and is defined as

n = (speed of light in a vacuum) / (speed of light in a medium)  [1]

By definition the refractive index of a perfect vacuum is unity (i.e. exactly one). The refractive index bears a close relationship to relative permittivity, εr and can be understood to result from the interaction between matter and light’s electric and magnetic fields. Light incident upon a boundary between media with different refractive indexes will be reflected and transmitted. In addition, the transmitted light may be “refracted”, i.e. it changes direction as described by Snell’s law.

For light travelling from air to glass (see figure 4) Snell’s law can be expressed as

sinθi / sinθt = nglass / nair,  [2]

where the angles are as defined in figure 4 and nair and nglass are the refractive indices of air and glass respectively.

Figure 4. Behaviour of a light ray travelling from air (low n medium) to glass (higher n medium). The light ray is partially reflected and transmitted. The transmitted ray changes direction (is refracted) at the interface according to Snell's law (θi, θr and θt are the angles of incidence, reflection and refraction of the light ray respectively). Note that a ray with an angle of incidence of 0° does not deviate at the boundary.


Material          n
Polycarbonate     ~1.58
Air               ~1.0003
Glass             1.48 - 1.85

Table 1. Some refractive index values

3.3 Lenses A lens is an optical component that in transmitting light rays uses refraction (i.e. the application of Snell’s law) to cause them to either converge or diverge. Lenses are usually constructed out of glass or transparent plastics.

The lenses used here will be "thin", glass, bi-convex (converging) spherical lenses, as shown in figure 5, with the following main characterizing features:
The axis of symmetry of a lens is known as its "principal axis". Lenses usually also have a very good "axial symmetry": the behaviour of the lens varies with distance from the axis but is independent of the direction from the axis.
A "bi-convex" lens is one that bulges outwards on both sides from its centre. The bulge is characterised by the radius of curvature of the left and right hand side surfaces, r1 and r2 respectively.
A "thin" lens is one whose thickness along its principal axis (d in figure 5) is much smaller than its focal length, f, i.e. d << f. It is an approximation that permits simpler equations to be used.
A "spherical" lens indicates that the front and back faces can be considered to be part of a sphere which has an associated radius (also known as its "radius of curvature").
Light rays parallel to the principal axis and incident on the lens will, after transmission, all pass through the "principal focus" of the lens on the opposite side (light can travel in either direction so the reverse is also true and there are two "principal foci"). Figure 3 explicitly shows this.
The distance from the optical centre, Oc, of the lens to the principal foci is known as the focal length, f, of the lens.
Planes perpendicular to the principal axis and passing through the principal foci are called "focal planes".

Figure 5. Main features of a bi-convex lens.

3.4 Image formation, ray diagrams and sign conventions
Reading this page you are using a convex (converging) lens in your eye to form a "real image" on your retina - it is real in the same sense as the image on a cinema screen is real. In forming the image the light from a point on the page travels through all parts of the lens. A consequence of this is that image formation can be understood by considering any convenient rays of light as shown in figure 6.

Figure 6. Formation of a real "image" of an "object" as understood through ray tracing (x and y are the heights of the object and image respectively and u and v are the distances of the object and image from the optical centre respectively).

Three convenient rays of light (labelled 1, 2 and 3 in figure 6) are:
Ray 1. A ray parallel to the principal axis which after refraction passes through the principal focus.
Ray 2. A ray passing largely undeviated through the optical centre.
Ray 3. A ray that passes through the principal focus on the object side of the lens and therefore emerges from the lens parallel to the principal axis.

Any two rays of light are sufficient and most textbooks use rays 1 and 2.

In addition to "real images" in optics there is also the concept of "virtual images". In this case rays appear to diverge from a point on an object. The concept is more commonly met with diverging lenses and is used in experiment 2.3, but its simplest example is a flat mirror, where the image of an object is perceived behind the mirror, at twice the object-to-mirror distance from the object.

In order to form equations that relate, for example, the focal length of a lens to the distances of the object and the (real and/or virtual) image from the lens for all possible situations (for example to include diverging as well as converging lenses) it is necessary to adopt a “sign convention”. The convention specifies the algebraic signs that must be given to the various lengths in the system. Different textbooks may employ different conventions and therefore have slightly different equations (which is mildly annoying).

General “University physics” textbooks are not very explicit in the conventions they employ, therefore the convention adopted here is that used in “Optics” by Hecht (publisher Addison Wesley).

In this convention optical beams enter the system from the left and travel to the right (as in figure 3). Using the symbols used in figures 5 and 6, the signs used are explained in table 2.


Quantity                     +                        -
u                            real object              virtual object
v                            real image               virtual image
f                            converging lens          diverging lens
x                            erect object             inverted object
y                            erect image              inverted image
Magnification (m = y/x)      erect image              inverted image
r                            boundary left of Oc      boundary right of Oc

Table 2. Meanings associated with the signs of thin lens parameters

Using this convention and by considering "similar triangles" in figure 6 it can be shown that:

the linear magnification, m = y/x = v/u,  [3]

and that 1/u + 1/v = 1/f.  [4]

Equation 4 is known as the “thin lens equation” or the “Gaussian lens equation”.

Another useful equation, which relates the focal length, f, to the radii of curvature, r1 and r2, of the surfaces of the (thin) lens and the refractive index, n, of the material from which it is made, is the lens maker's equation:

1/f = (n - 1)(1/r1 - 1/r2).  [5]

Note that for the bi-convex lens shown in figure 5, under this convention the first radius is positive and the second is negative.


Experiment 6: Introduction to multi-meters and oscilloscopes

Safety: The cell used in this experiment is low voltage (~2 V) but capable of delivering high currents if a low resistance circuit (e.g. wires or an ammeter) is connected between its terminals. There is no danger directly from electricity here but with high currents, components such as the wires can get very hot and it is possible to damage both these and electrical meters. Take care to follow the written instructions and consult a demonstrator if at all unsure.

Apparatus: 1x Fluke 21 and 1x Fluke 111 multi-meters, GW Instek GDS-1022 oscilloscope, Thandar TG 102 Function generator, 1x cyclon cell, 2x 4.7 MΩ resistors, breadboard, jump leads suitable for bread board.

Outline The purpose of this session is primarily to provide an introduction to instruments for the generation and especially the measurement of dc and ac voltages, and to become familiar with very basic circuit construction using standard breadboards. Most students will have used multi-meters (without necessarily understanding how they work) but far fewer will have used oscilloscopes. This is a structured training session so the experiments (such as they are) have been chosen to illustrate characteristics of the instruments. Students should ensure that they make experimental notes in their diary. Although none of it is demanding, there is a lot to get through and students will need to work efficiently. Concluding remarks will relate to the characteristics of the instruments used.

Experimental Skills
Use of instruments for measuring ac and dc electrical circuits. Awareness of the importance of understanding the limitations of such meters. Simple circuit building: including use of coaxial leads and breadboards.

Wider Applications
Oscilloscopes are widely used in teaching laboratories as measurement instruments and in research labs as test instruments.
As is examined here, the addition of an instrument to an electrical circuit will perturb (affect) that circuit: this is analogous to the perturbation of quantum mechanical systems by measurements made upon them.

1. Introduction 1.1. Reminder: The Basics of Electricity The term “electricity” usually refers to the flow or movement of charge, Q (units coulomb, C). In man-made metallic electrical circuits it is negatively charged electrons that move around to provide some useful function. Whenever there is a flow of charge there is said to be an “electrical current”.

A good analogy is water flowing through pipes. The amount (volume or mass) of water that has flowed past a point is analogous to charge; the rate at which water flows past a point (volume or mass/second) is, similarly, termed a current.

For electricity the equation relating charge moved to current is

electrical current, I (unit: amperes, A) = charge, Q (unit: coulombs, C) / time (unit: seconds, s)

or I = Q/t, or Q = It.  [1]

As with anything that moves, a push (force) is required to get it going. For water the force is provided by a pressure difference between two points, in the case of electricity, a potential difference (voltage). The flow of water or charge is limited by the resistance of whatever it is flowing through. A slightly opened tap has a high resistance and so the water flow is small, fully opened the resistance is much less and the flow is much larger. For electrical resistance the relevant equation is described by Ohm’s Law:

Resistance, R (unit: ohms, Ω) = voltage, V (unit: volts, V) / current, I (unit: amperes, A)

or R = V/I.  [2]

A difference between water and electricity flow is that water generally flows in one direction whereas electricity can move in one direction (known as “direct current (dc)”) or to alternate in direction (“alternating current” or “a.c.”). In dc circuits the direction of the current is governed by the sign of the applied voltages (conventionally it flows from + to -). For some devices it is important to get this right and components and wires are coloured as an aid to this. Conventionally red is positive and black is negative. In ac circuits the voltage alternates +ve and –ve so colour coding has no meaning.

1.2 Describing a.c. voltages In this session the ac voltage used will have a sinusoidal waveform (see figure 1):

V = VA sin(ωt) = VA sin(2πft),  [3]

where VA is the amplitude of the waveform, f is the frequency (Hz) and ω is the angular frequency (radians per second). This is the same form as the mains supply, however square and triangular waveforms are also common in the laboratory. The size of the voltage is most obviously described by its amplitude, VA, however there are more commonly used alternatives as shown in figure 1:

Figure 1. Sinusoidal waveform (V = VA sinωt) showing the alternative amplitudes (explained in the text) that may be used: here VA = 1 V, Vpp = 2 V and Vrms = 1/√2 V ≈ 0.707 V (the plot shows signal voltage /V against time /s).


The r.m.s. (root mean square) amplitude, Vrms, is used as it gives the same heating effect as a steady direct current or voltage and so is useful and easy to use in calculations. Most ac voltages are quoted as rms values (digital multi-meters give ac voltages as rms values). The difference between the maximum and minimum value is known as the peak to peak amplitude, Vpp. Peak to peak amplitudes are often convenient to measure (especially using oscilloscopes) and are an entirely acceptable way of describing signals.

For sine waves the relationship between the different amplitudes is given by:

VA = Vpp/2 = √2 Vrms.  [4]

See “Principles of Physics” by Halliday and Resnick (Ed 9, section 31.10) for a consideration of power in alternating current circuits.

1.3 Some (health and safety related) numbers: Mains electricity here is 230 V* ac (it alternates at 50 cycles per second or 50 Hz) and

the maximum current that can be delivered from a standard wall socket is 13 A ac. This stuff is dangerous.

Mobile phone or PC chargers might be 12 or 18 V dc and be capable of delivering a current of 10’s of milli-amperes (10’s mA or 10’s x10-3 A). This isn’t dangerous.

Rather more obscure – electrical signals in the eye can be up to ~200 µV (micro volt) (~200x10-6 V). This definitely isn’t dangerous.

Safety guidance: A rule of thumb used in our teaching laboratories is that voltages of 60 V DC or 25 V* AC or less are safe. Much higher voltages can be safe but it is then necessary to consider whether dangerous currents can be delivered (because in the end it‘s current that kills not voltages).

* These are rms amplitudes!

2. Experimental This training session starts by introducing multi-meters, before moving on to oscilloscopes (and necessarily also function generators). The session is ended with a couple of experiments to provide practice with the instruments and to illustrate some of their limitations. Required information concerning the instruments is given as required: fuller accounts may be found in the appendix of this lab book.

Important: remember to make a note of any measurements and their precision in your diary. In addition be advised that error propagation calculations are not required as part of this session.

2.1. Hand held digital multi-meters (DMM)
There is a wide variety of this type of meter (they may look different but are basically similar); they are used extensively in domestic and industrial environments and are accurate and precise enough to be entirely suitable for a wide range of applications, including undergraduate teaching laboratories. Their main function is in measuring current and voltage (ac or dc) and resistance.

Have a look at the multi-meters provided (a photo of a Fluke 111 is shown in figure 2): there are three main areas to get to grips with: the three sockets; the selector dial, and the display:


The sockets (terminals) – you use two of the three. The common (COM) is always used, the other depends on what it is that you want to measure (I, V or R).

The selector – there’s more than you’ll need here. You will probably only need ac (~) or dc (=) current (I) or voltage (V) and resistance (Ω).

The display – tells you the value and units of the parameter you are measuring. These can come with prefixes to indicate multiples of the base SI unit.

Figure 2 A Fluke 111 and the main features of its front panel

2.1.1 Messing around with a pair of multi-meters

2.1.2 Multi-meter as an ohm (resistance) meter
Set up the Fluke 111 multi-meter as an ohm meter with two leads in the appropriate sockets. Then measure resistances in the following situations:
To measure the resistance of your body, hold the bare metal contacts of the leads one in each hand (it is unlikely to be stable so make a note of the range together with any observations on the behaviour).
Connect the bare contacts together.
Hold the leads with the contacts apart so that there is no electrical connection at all.

Equation 2 indicates that to determine a resistance of something it is necessary to know both the voltage difference across the something and the current flowing through it. So to make the measurements above the multi-meter has (somehow) generated a dc voltage, measured the dc current (when there is one) and divided one by the other to obtain a resistance value.

Note: A consequence of having to work this way is that resistance measurements with an ohm meter should never be performed on a live circuit, i.e. one that already has voltages within it. At best the meter will get confused, at worst the equipment may be damaged.

2.1.3 Multi-meter as a voltmeter
So, a digital multi-meter (DMM) operating as an ohm meter generates a voltage. Therefore with a second DMM it ought to be possible to measure the voltage. Here again use the Fluke 111 multi-meter as an ohm meter and the Fluke 21 multi-meter as a dc voltmeter. Using two leads connect the two devices (using the appropriate sockets).


The voltmeter indicates the voltage generated by the ohmmeter, whilst the ohm meter measures the resistance of the voltmeter.

Use the values and equation 2 to calculate the current that must be flowing in this circuit.
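(As a purely illustrative example of the arithmetic, with made-up readings: if the ohmmeter were to read 10 MΩ - the input resistance of the voltmeter - while the voltmeter reads 0.4 V, the current flowing would be I = V/R = 0.4 V / 1.0×10⁷ Ω = 4×10⁻⁸ A, i.e. about 40 nA.)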

2.1.4 Multi-meter as an ammeter
In a similar fashion to the previous section, here the current generated by an ohm meter will be measured directly by using a second meter as an ammeter. This is the only time that current measurements will be made in this session. Use the Fluke 111 multi-meter as an ohm meter and the Fluke 21 multi-meter as a dc ammeter (because this meter can measure lower currents than the Fluke 111). Using two leads connect the two devices (using the appropriate sockets).

The ammeter indicates the current (in mA) generated by the ohmmeter, whilst the ohm meter measures the resistance of the ammeter.

Use the values and equation 2 to calculate the voltage that must be produced by the ohm meter to generate this current. (It shouldn’t be a surprise that the voltage is a lot less than before).

Summary so far:
Multi-meters can be used to measure various electrical properties.
As a resistance (ohm) meter the multi-meter generates a voltage (and so a current) and then works out R = V/I. Consequently it must not be used in live circuits.
Voltmeters have high resistances (so as not to draw current).
Ammeters have low resistances (so as not to drop voltage).

2.2 Introduction to using the oscilloscope (and signal generator) Oscilloscopes are useful for examining signals that vary over time, i.e. their waveforms. To see this ac signals from a signal generator will be examined. Signal generators produce a time varying voltage signal of various shapes, the common ones of which are sine, triangular, and square whose frequency and amplitude can be controlled.

(Note: a summary version of how to use the Oscilloscope can be found in background notes).

2.2.1 Making electrical connections

Figure 3. Front panel of the GW Instek oscilloscope. The important features initially are shown with circles (Power, signal input channel 1, Volts/div (y-axis), and time/div (x-axis).


We’ll be using one of the input channels so this simplifies the set up of the scope.

Use two coaxial leads, both with 4 mm termination.
Connect the first lead between the 50 Ω output of the signal generator (BNC) and the voltage input of either of the Fluke DMMs (set to operate as a voltmeter).
Connect the second lead from the Fluke to the channel 1 input of the scope, but be careful to ensure that the green (earth) leads are connected together otherwise the signal will be shorted out.

With this arrangement the two instruments receive exactly the same voltage signal. Initially, only the function generator and oscilloscope will be used; comparisons with the DMM will be made later.

2.2.2 Set up the signal generator (Thandar TG 102)
Turn on the signal generator and set it to: sine wave; DC offset off (button out); ~50 Hz (i.e. 0.5 Hz on the dial and x100 on the multiplier); output level minimum and 0 dB.
We'll start at ~50 Hz since this is the mains supply frequency and we can reasonably expect the DMM to give a correct reading at this frequency.

2.2.3 Set up the oscilloscope (GW Instek GDS-1022 shown in figure 3)
Turn on the oscilloscope and when the GW Instek banner has disappeared press "Save/Recall" and then select "Default Setup". (The default setup is the obvious configuration from which to start and overcomes the issue that the oscilloscope remembers its previous configuration, which may or may not be appropriate or the same for all students.)

Channel 1 and channel 2 are then positioned at the centre of the top and bottom halves of the screen respectively.

We’ll only be using channel 1; to cancel channel 2 press its button twice (once to select, 2nd time to turn it off). Note: the channel 1 and 2 buttons are colour coded yellow and blue respectively and this colour code is also used on the LCD display.

Now press “Autoset”: with what is a reasonable signal level (>30 mV) and frequency (>20 Hz) hopefully the scope has been able to choose suitable signal (y axis) and time (x axis) ranges and triggering conditions* resulting in a stable sine wave on the screen. If this isn’t seen please find a demonstrator.

*A trigger ensures that each update of the trace starts at the same point of the oscillating cycle and consequently gives a stable display, otherwise the update would be with an arbitrary phase shift, and you would end up with an unstable trace on the screen.

2.2.4 Finding your way around the LCD display There is a lot of information around the periphery of the display, i.e. around the sinusoidal trace seen in the centre.

To the left of the trace: A number (the channel number) and an arrow (►) showing the position of a 0 V

signal. The position of 0 V (and so the whole trace) can be altered by rotating the “vertical”

channel 1. When this is done the position of 0V on the screen (versus the central horizontal axis) is displayed in the bottom right hand corner of the display. Try this.

To the right: The broad blue column contains a changeable “menu” of choices and measurements.

This can be ignored for now. On the trace is an arrow (◄) indicating the triggering voltage. Triggering is central to

the operation of oscilloscopes and so will be considered here.


To understand the general principles simply investigate - by rotating the Trigger level knob (on the right of the panel) clockwise and anticlockwise slightly about its original position. Observe that:
(i) the arrow (◄) indicating the trigger voltage moves up and down on the right of the display

(ii) a “Trigger level = xxx mV” appears in the bottom left of the screen

(iii) the waveform moves to the left and right.

What’s happening is that the oscilloscope displays one trace and then almost immediately replaces it with another. The reason the trace appears stable on the screen is that each trace is made to start in the same place, i.e. it is triggered under the same conditions. In fact, with this digital scope the trigger point is in the centre of the x axis on the screen. Adjusting the trigger level changes the voltage at this position. Have another play with the trigger voltage if you want to. Finally, adjust the trigger voltage until it goes out of the range of the oscillating signal. Once this happens the system cannot trigger and the screen simply updates randomly leading to an unstable trace. Underneath the trace: On the LHS the scales for the two channels are given. These numbers give the voltage

corresponding to the vertical side of a (~1 cm) square (or volts per division).

Note that the active channel is denoted by a number on a coloured background whereas the inactive channel is a number on a dark background.

Next along is the “horizontal status”: “M” is for Main mode (of the scope) and the time corresponds to a ~1cm square (or time per division).

On the right, both with a green background are a “T” (for Trigger) followed by the triggering conditions: in this case an edge (i.e. a changing signal) on the channel 1 waveform, in fact here a rising edge (i.e. the signal must be increasing with time). Below this is an “f” (for frequency) followed by a value.

The frequency value is hopefully ~50 Hz. At this point make a note of the value measured and compare it with the frequency on the function generator. The oscilloscope is very good at measuring frequency and is much more reliable than the function generator.

Above the trace:
The "▼" symbol above the centre of the screen and the symbols above it relate to the horizontal position of the waveform. This can be altered by rotating the "horizontal" knob. Try it.

To the right in green the “Trig’d ●” indicates that a signal is being triggered (on taking the trigger voltage out of the range of the signal this changes to “auto ●/o” meaning that the screen is updated regardless of the trigger conditions).

Make a note of the y axis volts per division and x-axis time per division chosen by the scope using its “autoset”. Reminder: “autoset” decides on triggering, y and x axis ranges but you can subsequently alter these. Try varying the y axis volts/div and the x-axis time/div in order to determine the available ranges.


2.2.5 How to measure signals with the oscilloscope Oscilloscopes aren’t precise measurement instruments although they are good enough for most UG experiments. An advantage of a digital scope over older analogue scopes is that since signals are digitised it is easy to perform mathematical manipulations on them, i.e. they will do some (and sometimes all) of the work for you.

Here the ~50 Hz signal set up earlier will be measured.

Using "Cursor" (to do some of the work)
Press "Cursor" and two vertical lines appear on the screen that are used to give two horizontal positions (X1 and X2). Pressing the X↔Y function key instead accesses two vertical positions (Y1 and Y2). The position of the cursors is controlled by the "Variable" knob (at top left). With the function key X1X2 (Y1Y2) selected the separation of the cursors is fixed. To control them individually select X1 (Y1) or X2 (Y2). Take measurements of: the amplitude, the peak to peak amplitude (i.e. from maximum to minimum) and the period (and so the frequency).
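(A purely illustrative example of the cursor arithmetic: if the Y cursors were to read Y1 = +0.70 V and Y2 = -0.70 V, the peak-to-peak amplitude is Y1 - Y2 = 1.40 V; if the X cursors sit one period apart at X1 = 2.0 ms and X2 = 22.0 ms, then T = 20.0 ms and f = 1/T = 50 Hz.)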

Note that if the cursors are positioned one period (T) apart the scope calculates the frequency (f = 1/T) - and since averaging over one period is poor experimental practice be aware that this really is a rough measurement.

Using "Measure" (to do all of the work)
Press the "Measure" menu key and the peak to peak voltage (Vpp), the average voltage (Vavg) and the frequency (f) appear directly in the blue column (along with other stuff that will be ignored here).

Record the measurements and briefly compare with those obtained using the cursors.

3. Electrical measurements with multi meters and scopes The session will finish off by using the meters. The two experiments have been chosen to illustrate limitations with the meters and, as a result, the care and understanding required when using them. The second involves making a simple circuit on a “breadboard”, something that few first year students will have come across before.

3.1. Comparing ac voltage measurements made by an oscilloscope and a DMM As mentioned previously, multi-meters are likely to be designed to measure 50 Hz (supply) voltages, which raises the question: what is their reliable frequency range?

Note: This experiment uses the same circuit arrangement as above (as set up in section 2.2.1). The DMM may have turned itself off and may need to be turned on again.

Start by confirming that the voltages measured by the two instruments agree at ~50 Hz. Reminder: ac voltmeters give rms values, whereas measurements from oscilloscopes will be amplitudes or peak to peak amplitudes – see equation 4.
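As a reminder of the conversion (a simple worked example; the 2.00 V figure is illustrative only):

```latex
V_{\mathrm{rms}} = \frac{V_{pp}}{2\sqrt{2}}, \qquad \text{e.g. } V_{pp} = 2.00\ \mathrm{V} \;\Rightarrow\; V_{\mathrm{rms}} = \frac{2.00}{2\sqrt{2}} \approx 0.71\ \mathrm{V}
```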

Now repeat the measurements as a function of frequency in order to determine the range in which the multi-meter can be trusted: see the suggested data table below.

Examine the range 10 – 10⁵ Hz; start by increasing the frequency a decade (factor of 10) at a time, before making finer adjustments in areas of interest (a total of 10 points should be sufficient).

Table: Suggested format of data table

Frequency, f /Hz    log10(f)    DMM reading /V    Scope Vpp /V    Vpp/(2√2) /V


Plot a graph of DMM voltage versus log10(f). Be clear on the reliable frequency range for the multi-meter and the criteria used in deciding.
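If you prefer to process the comparison by computer rather than by hand, a minimal sketch is given below. The numbers are illustrative placeholders, not real data; substitute your own readings.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative placeholder readings - replace with your own measurements.
f = np.array([10, 100, 1000, 1e4, 3e4, 1e5])          # frequency / Hz
v_dmm = np.array([2.1, 2.1, 2.1, 1.9, 1.2, 0.3])       # DMM reading / V (rms)
vpp_scope = np.array([6.0, 6.0, 6.0, 6.0, 6.0, 6.0])   # scope peak-to-peak / V

v_rms_scope = vpp_scope / (2 * np.sqrt(2))             # convert Vpp to rms

plt.plot(np.log10(f), v_dmm, 'o-', label='DMM (rms)')
plt.plot(np.log10(f), v_rms_scope, 's--', label='Scope Vpp / 2\u221a2')
plt.xlabel('log10(f / Hz)')
plt.ylabel('Voltage / V')
plt.legend()
plt.show()
```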

3.2 A potential divider circuit measured by a multi-meter Ideally the introduction/inclusion of a meter into any circuit will not (significantly) affect the circuit. If it does, the meter reading is potentially misleading. This section examines the consequences for voltage measurements of the large but finite impedance of an instrument. Although this experiment could be performed with either ac or dc using an oscilloscope, we’ll revert to dc measurements with a multi-meter.

The circuit to be measured shown in figure 4 is a simple voltage dividing circuit. Assuming that there is no meter attached or that such a meter does not affect the measurement it can be understood by considering that the current through the two resistors must be the same and so

I = V1/R1 = V2/R2        or        V1/V2 = R1/R2                [5]

where R1 and R2 are the two resistors and V1 and V2 are the voltages across them.

Alternatively        Vcell = V1 + V2 = IR1 + IR2 = I(R1 + R2)                [6]

Figure 4 Circuit arrangement for a potential divider. Ideally, adding the meter to probe the circuit will not affect the voltages across each resistor.

The circuit will be constructed on a bread board (or prototype board). Before making the resistor circuit spend no more than 5 minutes doing the following in order to understand its electrical connections (see the “background notes” for a description of these boards):

Connect (thin) jump leads to 2 of the 4 mm posts ensuring that some bare wire protrudes from the post: a common cause of poor connections is pinching down on the wires’ insulation.

Connect the DMM as an ohmmeter to the same 2 posts.


Touch the bare, free ends of the leads together and confirm that you have a short using the meter.

Now plug the free ends of the leads into different parts of the perforated block to investigate which points are connected and not connected. (The figure in the background notes section indicates how rows are connected so there is no need to make diary notes on this).

Now make up the series circuit using the 2 nominally identical high value resistors (both ~4.7 MΩ) provided: this will comprise the 2 resistors spanning 3 rows and wires from each row to all 3 of the 4 mm posts.

Before connecting the cell, use the 4 mm posts and a DMM as an ohmmeter to measure and note the value of both of the resistors used. Change the DMM to act as a dc voltmeter and connect the cell across both resistors. Connect the voltmeter first across both resistors (and note the cell voltage) then across one of them (again noting the voltage).

If the two resistors are identical (R1 = R2) the potential across one resistor should be half the cell voltage; that it isn’t results from the finite resistance of the voltmeter (RV). The parallel combination of R1 and RV has an overall resistance of Rov:

1/Rov = 1/R1 + 1/RV                [7]

resulting in a decrease in the voltage across the R1/RV combination.

Use the measured voltages to calculate a value for RV (in equations 5 and 6, R1 must be replaced by Rov).

Compare RV with the measurement obtained earlier in Section 2.1.2.
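A minimal sketch of the RV calculation is given below, assuming the cell voltage and the voltage across the R1/RV combination have been measured; all numerical values are hypothetical placeholders.

```python
# Hypothetical values - substitute your own measurements.
R1 = 4.7e6      # measured resistance of first resistor / ohm
R2 = 4.7e6      # measured resistance of second resistor / ohm
V_cell = 1.50   # voltage measured across both resistors / V
V1 = 0.55       # voltage measured across R1 (in parallel with the voltmeter) / V

# The same current flows through R2 and the R1/R_V combination, so
# (equations 5 and 6 with R1 replaced by R_ov):
R_ov = R2 * V1 / (V_cell - V1)        # resistance of R1 in parallel with R_V
R_V = 1.0 / (1.0 / R_ov - 1.0 / R1)   # equation 7 rearranged for R_V

print(f"R_ov = {R_ov / 1e6:.2f} Mohm, R_V = {R_V / 1e6:.2f} Mohm")
```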

Note: As the resistance of the voltmeter/multi-meter becomes significantly greater than that of the circuit, the measurement becomes less affected. Oscilloscopes in general have lower input resistances than multi-meters (try measuring it with the multi-meter and/or check the front panel of the scope).


Experiment 7: Magnetic Fields and Electric Currents.

Equipment List: Current balance, rheostat (a coil of wire with a slider used to vary its effective resistance), Weir p.s.u., multi-meter (rated to 10 A), small magnetic compass, A4 paper.

Safety. The current balance may spark. The resistor can get VERY hot over time.

Outline The shape of the magnetic field lines in the vicinity of two separated permanent magnets and around a current carrying wire is investigated using small magnetic compasses. The force on a current carrying wire passing through the magnetic field of permanent magnets is then investigated using a “current balance” and used to obtain a value for the size of the magnetic field. The experiment provides an introduction to magnetic materials and essential concepts of electromagnetic theory.

Experimental skills
Make and record measurements of magnetic field lines.
Familiarity with the magnitude of magnetic fields generated by electrical currents and permanent magnets.
Experience of the effect of stray magnetic fields in a laboratory environment.
Application of vector cross products to real situations.
Use of a ballast resistor to limit the current flowing in a circuit.

Historical perspective and wider applications

Magnetic materials: the use of lodestone as a crude magnetic compass dates to ~1000 BC.

Electromagnetism: In 1819 in Copenhagen Hans Oersted discovered, almost by accident, that a compass needle can be influenced by a nearby electrical current. This was the birth of electromagnetism, one of the most important fields in both science and engineering, with profound influence on modern life: Michael Faraday discovered electromagnetic induction and developed the idea of a field for dealing with action-at-a-distance effects. These ideas led to the development of the dynamo, motor and transformer. James Clerk Maxwell put the field ideas into mathematical form and predicted electromagnetic waves. Einstein’s consideration of the need for relative motion led to the theory of relativity.

1. Introduction Magnetic fields can arise from magnetic materials and from moving charges. This experiment is concerned with examining both such fields and also the forces resulting from the interaction between magnetic fields and moving charges (due to a current flowing through a wire).

1.1 Magnetic fields Magnetic fields are vectors and therefore have both a direction and a magnitude (or strength). They are produced by magnetic objects or by moving charges. The oldest known magnetic field is that due to the Earth and this leads to the concept of poles and the first way of defining the direction of the field, i.e. a “North pole” will point to the Earth’s North pole (which, since opposite poles attract magnetically, must itself be a South pole). The direction of a magnetic field is defined to be that in which a North pole will move. Magnetic compasses point in the direction of a magnetic field, i.e. towards a magnetic south pole.


Magnetic fields can vary widely in both magnitude and direction as a function of position, are therefore mathematically complex, and are often visualised by way of “field lines”. These are constructed by using arrows to indicate the direction of the field at various points and then connecting them with lines. The number of lines used must be limited and this is done in such a way that the density of the lines in the vicinity of a point gives an indication of the relative strength of the field. An example, representing a bar magnet, is shown in Figure 1. The permanent magnets used here are similar to the one shown except that their poles are wider than their length.

Figure 1: Magnetic field lines in the vicinity of a bar magnet [1]

Figure 1 also hints at another important property of magnetic field lines. Unlike electric or gravitational field lines they form loops. This relates to the fact that there is no such thing as a magnetic monopole.

1.2 Electromagnetic theory (and vector cross products) Electromagnetic theory gives the magnetic force, F, exerted on a charge, q, moving with velocity, v, in a magnetic field as

F = qv x B (N) [1]

At the same time the magnetic field generated by a point charge moving with velocity v is

rvB 2

0

4 r

q

(tesla, T) [2]

where r is the vector from the point charge to the point at which the field is determined and μ0 is the permittivity of free space (μ0 = 4π x 10-7 H/m or 1.26 x 10-6 TmA-1).

These definitions are given as vector cross products, so although students may be more familiar with the use of Fleming’s left- and right-hand rules for determining directions, here it makes more sense to use the more general rules for dealing with vectors.

The case for two vectors a and b is illustrated in Figure 2.


Figure 2: The cross products of two vectors a and b separated by an angle θ. The resultants are in a direction perpendicular to the plane containing both a and b.

For the cross product c = a × b the result is perpendicular to the plane formed by a and b and its direction is given by the Right Hand Rule*: Imagine your right hand pointing along a. Curl the fingers around from a to b. The thumb then points in the direction of c.

* From this the coordinate system being used is said to be right handed. As drawn above, a × b = c is in the +z direction whereas b × a = − c is in the negative z direction. (In a left handed system, following a left handed rule, the directions are reversed.)

Using this rule, and bearing in mind that the moving charges in this experiment will always be negatively charged electrons, equations 1 and 2 can be used to determine the direction of both force and magnetic field vectors.

Note: Ultimately these two models for magnetic fields, poles and flowing currents, are identical and equivalent and the magnetic fields produced by magnetic materials originate in microscopic currents flowing cooperatively. The magnetic pole model is therefore a simplistic viewpoint but one that is very useful in many circumstances. Both approaches will be employed here.

1.3 Charges moving in a wire The above descriptions for individual charges, whilst useful for considering the direction of force and field vectors, require development for the situation here where there are many moving charges (electrons) and all are confined to a metallic wire. For a conductor carrying a current in a magnetic field, in the case where the current, I, and field, B, are perpendicular the force on the wire is given by

F = BIL (N) [3]

where L is the length of the wire in the field. This comes from a consideration of the number and velocity of charges experiencing the magnetic field and is derived in the lecture courses and in Young and Freedman. Somewhat similar considerations can be applied to the magnitude of the magnetic field around a straight conductor. The field lines in this case are circles concentric with the wire and decrease with distance r from the wire. For an infinitely long conductor the magnitude of the field is given by:

B = μ0 I / (2πr)    (tesla, T)                [4]
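As an indicative worked example of equation 4 (the 2.5 A current and 1 cm distance are chosen only to reflect the conditions met later in this experiment):

```latex
B = \frac{\mu_0 I}{2\pi r} = \frac{(4\pi\times10^{-7})\times 2.5}{2\pi\times 0.01} = 5\times10^{-5}\ \mathrm{T} = 50\ \mu\mathrm{T}
```

i.e. comparable to the Earth’s field quoted later (30-60 μT).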

Magnetic field lines due to a current in a wire are shown in Figure 3.


Figure 3. Magnetic field lines surrounding a current carrying wire. For the direction of the field lines shown the current is in a direction out of the page.

2. Experimental

2.1 Apparatus (the current balance)

The equipment, shown part assembled in Figure 4, consists of a copper frame (scribed on one side) which balances on two pivot edges. A break in the frame, in the region of the pointer, ensures that any current flowing between the pivots only passes through one “arm” of the frame. The pointer can be positioned in the opening of a support that restricts its movement. The current carrying arm is placed in the magnetic field centrally between the poles of strong permanent magnets mounted on mild steel yokes. With this arrangement, the current, magnetic field and movement of the wire are all at right angles and so equation 3 applies.

Figure 4: Frame mounted on centrally positioned pivot edges. The pointer is to the left and is shown within the support. Current flows only through the arm on the right, passing between the poles of permanent magnets.

Electrical circuit: The copper frame has a very low resistance (~0.2 Ω) so to protect the power supply unit and the equipment (from high currents) a ballast resistance of ~5 Ω should be placed in series with the frame. The variable resistor (rheostat) provided is a suitable ballast (in terms of resistance value and current capacity). The rheostat has three terminals and a maximum resistance of ~10 Ω. To obtain a resistance of ~5 Ω simply move the top slider half way along the coil and make sure to use the top and one of the bottom connectors. The power supply unit (dc output) and an ammeter set to its 10 A range, also connected in series, complete the circuit.

When required use the dial on the power supply unit to set the current.

IMPORTANT: Currents must not be allowed to exceed 2.5 A, and the current must be reduced to zero between measurements.

Magnets: When making calculations it will be assumed that the “magnets” are exactly 5 cm in length and have no “edge effects”. No “edge effects” implies that the magnetic field is confined to the region directly between the poles - in reality it spreads a little. This is addressed again in section 2.2.

Weights: In this experiment, small pieces of photocopier paper (cut up using scissors) will be used. A figure of merit for paper is its areal mass density and the photocopier paper used by the School is indicated to be 80 g/m2. Measurements show that this figure is accurate to +/- 1% and so the areal density should be written as 80.0 +/- 0.8 g/m2. This accuracy is more than sufficient for the purposes of this experiment.

Since the wire frame balances on a pivot, forces on the frame should be considered as moments. However if masses are added on the same section of the frame that passes through the magnets, the distance from the points of application of the force to the pivot is the same and it is sufficient to only consider forces.

Field line measurements: Early experiments examine the shape of (permanent) magnetic field lines and small magnetic compasses are used for this purpose.

2.2 The magnetic field lines associated with permanent magnets

The nature of the magnetic field surrounding a single permanent bar magnet with a similar geometry to that used in this experiment is shown in figure 1. This part of the experiment examines the more complicated case of: (i) two such magnets separated by a fixed gap; (ii) two such magnets separated by the same extent but mounted on a “U” shaped yoke.

Set up

On a fresh piece of A4 paper place the two magnets centrally, with the N pole of one facing the S pole of the other so that they attract, separated at first by the wooden block. (The wooden block is not magnetic and so has no effect on observations.)

Trace around the magnets so that they can be re-positioned if moved accidentally.

Experiment

Use the small compass to determine the direction of the field lines* in the vicinity of the magnets: find the direction of the field line at a point, draw an arrow in the position of the compass, move the compass along in the direction of the field line and repeat.

Concentrate on one side of the magnet and take enough measurements to illustrate symmetry and to generate a reasonably accurate impression of the field lines (as in Figure 1).

Repeat the process for the same magnets separated by a “U” shaped yoke (the magnets should still be oriented N-S and the wooden block should be removed).

Describe and attempt to account for the difference between the two cases.

*A useful point to note: after being disturbed the compass needle exhibits a damped oscillation, whose frequency increases with field strength.


2.3 Oersted’s experiment (A classic experiment of physics)

Reminder: Oersted’s experiment, which started the field of electromagnetism, was simply the observation that a current travelling through a wire affected a magnetic compass in its vicinity. Here the effect will be used to confirm the cross product expression given in equation 2.

Set up

The equipment should be set up as shown in Figure 4, although the magnets are not required at this stage and it is not important for the frame to be balanced, it can be held horizontal using the support (shown on the left).

Connect the power supply unit using the dc output: Use red wires to connect the current balance to the positive output and black wires to the negative output (this will help when determining the direction of charge flow) and pass the current through an ammeter on its 10 A range.

Experiment Place the small compass close to the frame (as close as possible without touching) and confirm, such as by increasing the current to 2.5 A and then decreasing it again in different positions around the wire, that the current has an effect on the compass. This, in essence, was Oersted’s experiment. Take care, the wire will spark.

Whilst a movement of the compass needle due to the current in the wire should be obvious it is true that the effect is weak. Most notably the contribution due to the current is competing with the Earth’s magnetic field (which varies with position but is in the range 30-60 μT) and with that due to the steel in the bench system.

Use estimates and observations to decide the origin of the largest contribution to the field experienced by a compass when it is as close as possible to the wire carrying a current of 2.5 A.

Passing a current of 2.5 A through the wire for short periods, and with reference to Figure 2, use the compass to determine the direction of the magnetic field. Confirm, through consideration of the direction of current/charge flow, that the direction is as predicted by equation 2. (Demonstrators will expect to see a suitably labelled diagram here).

2.4 Investigation of a force on a current carrying wire in a magnetic field

The current I (A) and length L (m) of wire in the field can be varied independently and the magnetic force F (N) measured by balancing it against the force due to known masses in the gravitational field. The magnetic field B (T) is determined by the strength of the permanent magnets and their separation and has a constant value that is measured in this experiment. Once the magnetic field strength has been found the apparatus is used as a mass balance to measure (relatively small) masses.

Set up

Connect the voltage source, the rheostat, the ammeter and the balance in series. The rheostat is a coil of wire with a slider used to vary its effective resistance. It is a useful way of controlling current in this experiment.

The next objective is to balance the frame with no magnetic forces acting on it. To aid this one side of each frame has been finely scribed. Locate the scribed grooves on the balance with the pointer between the balance indicator (this will limit the movement of the frame). Finely balance the frame by moving the small metal rider along the frame (best done with tweezers, but bear in mind they are magnetic).


Position the magnet so that the frame lies centrally between the “magnet’s” pole-pieces.

Pass a current through the frame, ensuring that the current is such that the arm is raised. This upwards force will later be counterbalanced by weights placed on the same section of the arm that passes through the magnet.

Experiment

Cut out a square or rectangle of paper, measure its dimensions and place it on the balance.

Increase the current until the beam is balanced.

Repeat the previous steps using different or additional areas of paper.

Plot a suitable graph and use it to show that F is proportional to I and to calculate the magnetic field, B for the magnets used.
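One possible way to process the balance data (a sketch only; the areas, currents and 5 cm field length are placeholder assumptions, and you may equally well fit by hand on graph paper):

```python
import numpy as np

# Placeholder data - replace with your own measurements.
areal_density = 0.0800                               # kg/m^2 (80 g/m^2 paper)
g = 9.81                                             # m/s^2
L = 0.05                                             # length of wire in the field / m

areas_cm2 = np.array([5.0, 10.0, 15.0, 20.0, 25.0])  # paper areas / cm^2
I_balance = np.array([0.4, 0.8, 1.2, 1.6, 2.0])      # balancing currents / A

F = areal_density * (areas_cm2 * 1e-4) * g           # weight of the paper / N

# F = B*I*L, so a straight-line fit of F against I has gradient B*L
gradient, intercept = np.polyfit(I_balance, F, 1)
B = gradient / L
print(f"gradient = {gradient:.2e} N/A, B = {B * 1000:.1f} mT")
```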

Note: Clearly it is important that the frame and rider do not move during the course of the experiment. If they move or are suspected to have moved it will be necessary to rebalance the system with no masses and no current flowing.

References
1. http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/elemag.html (accessed 2/11/10)


Experiment 8: Radioactivity, counting statistics and half lives.

Important Safety Information

For this experiment you must receive training and your risk assessment must be checked by your demonstrator before you proceed with practical work.

Two radioactive sources are provided. These are both sealed to minimise the risk of leakage. When using radioactive materials, exposure should be minimised by: 1. limiting the amount of time exposed to the source; 2. maintaining a reasonable distance from the source; 3. washing your hands immediately after performing the experiment and certainly before consuming food and drink; In addition the Pa generator must always be used over the drip tray provided.

General Introduction You will perform some basic experiments in the measurement of radioactivity using standard pieces of equipment for detection of radioactive sources. The (effectively) constant radioactivity of a uranium oxide source is used to determine the correct operating voltage for a Geiger Muller (GM) tube. The GM tube is then used to perform two experiments: (i) measurement of background radiation and its analysis in terms of Poisson statistics, (ii) measurement of the (short) half-life of protactinium 234 (Pa234), an element in the decay series of uranium 238.

Aims and experimental skills Safe handling of mildly radioactive material. Setting up and use of Geiger Muller detectors. Analysis of “counting experiment” data using Poisson statistics. Determination of half-life values.

1. Experiment This experiment consists of three parts. In part 1 the operating characteristics of your Geiger-Muller (GM) detector are investigated; in part 2 background radiation is measured and analysed; in the final part, the half life of Protactinium234 is measured.

1.1 Setting up the detector Note: This section is concerned with setting the detector up for later measurements. (Refer to Background section 2.5.) First turn the counter on with the anode voltage set to 400 V and let it warm up for ~5 minutes. Use the warming up period to understand how to operate the counter: set it to “counting” and “start”. The unit should then display the cumulative counts. These counts can be zeroed using the “reset” button.

Towards the end of the warm up procedure measure the background counts accumulated over a 10 s period - there should be something like 5 to 10 counts if the detector is working properly.

Now set the GM detector voltage to a minimum and place the UO2 (“lollipop”) close to the detector window. Slowly increase the voltage until counting starts. This is the starting potential. Record this voltage and count for one minute to give the count rate in counts per minute. Increase the voltage and count for one minute. Repeat this procedure until the maximum voltage available is applied. (This voltage will be less than that producing onset of continuous discharge.) Plot the characteristics. Decide on the optimum voltage at which to operate the GM detector. (See Background section 2.5.)

1.2 Background radiation (+Poisson statistics) Due to the different sensitivities to different particles the measurement of background radiation by a Geiger Muller tube is not straightforward. However, comparative studies are possible and here the background detection rate is convenient for investigating the statistics of counting.

Measuring background radiation Refer to Background section 2.2. Poisson statistics involve counting events in defined time periods. Here the experiment involves noting the total count every 5 s for a period of 360 s - do not reset the counter every 5 s. This is quite intense so draw up a suitable table in advance that can be filled in during data collection.

Perform the data collection (following which note any relevant observations).

Analysis using Poisson statistics

The measured value required here (x in equation 2 in Background 2.2) is counts/time interval and will be an integer. The data collection methodology indicates that the smallest time interval that can be used is 5 s, however it is instructive to perform the analysis for both 5 s and 10 s intervals. (There is potential for confusion here so diary entries should be clear).

Data distributions Tabulate the counts for each 5 s (and 10 s) time interval (x) and their frequency (f(x)). Plot histograms (f(x) versus x) for both intervals, i.e. use separate plots. Determine the mean counts/time interval and the number of data points for the two intervals. Use these to determine “expected” Poisson distributions using equation 2*, and plot the points on the same graphs as the experimental data. Is the total number of timing intervals the same for both distributions? Why does this matter?

How do your results compare with the theoretical Poisson distribution? What is the signal: noise ratio in both cases?

* Important note: equation 2 represents a normalised distribution.
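A minimal sketch of the comparison for the 5 s intervals is given below; the counts array holds made-up placeholder values and should be replaced by your own readings (and repeated with 10 s bins).

```python
import numpy as np
from math import exp, factorial
import matplotlib.pyplot as plt

# Placeholder counts per 5 s interval - replace with your own data.
counts = np.array([2, 4, 3, 1, 5, 3, 2, 4, 3, 3, 2, 6, 1, 3, 4, 2, 3, 5,
                   2, 3, 4, 3, 2, 3, 1, 4, 3, 2, 5, 3, 2, 4, 3, 3, 2, 3])

x = np.arange(counts.max() + 1)
freq = np.array([(counts == k).sum() for k in x])   # observed frequency f(x)
mu = counts.mean()                                   # mean counts per interval
N = len(counts)                                      # total number of intervals

# Equation 2 is normalised, so scale the expected distribution by N
expected = np.array([N * mu**int(k) * exp(-mu) / factorial(int(k)) for k in x])

plt.bar(x, freq, label='measured')
plt.plot(x, expected, 'ro-', label='Poisson with the same mean')
plt.xlabel('counts per 5 s interval, x')
plt.ylabel('frequency, f(x)')
plt.legend()
plt.show()
```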

Remember to take account of your measured mean background rate in the subsequent measurements. Explain why you do this.

1.3 The half-life of Pa234 This Generator is supplied in a sealed translucent container which is virtually chemically inert, and under normal circumstances is leak proof. For storage, the generator is packed in an outer container.

Whilst in use the generator should be placed upside down, and after the experiment, the generator must be returned to its protective beaker. When not in use the generator must be stored with the plastic cap uppermost.

Check your risk assessment and especially remember to use the disposable gloves and perform the experiment over the plastic drip tray.


Figure 1. Arrangement of source and detector

Remove the flask from the box. Shake the flask while holding it above the drip tray for a short period of time (10 seconds will be enough) until the contents have completely mixed.

Replace the source upside down as shown in figure 1 and record the number of counts per unit time. The easiest way to do this is to record the total number of counts (every 30 seconds) and work out the count rate afterwards. Continue until the count rate is roughly constant, i.e. for approximately 20 minutes.

Plot a graph of count rate versus time. Remember to take background counts into consideration. Comment on the graph obtained.

Finally, process your results to find the half-life of Protactinium-234. The half life can be found from the graph by measuring the time taken for the count rate (of Pa234) to fall by a half. If the count rate decreases exponentially to zero this task is easy, if not then you will have to decide which is the most sensible approach and explain what you decided and why. What is the most accurate graphical method to use to find T1/2 and why? How do you think the signal to noise ratio changes throughout the experiment?

Repeat the experiment if there is time to do so.
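A possible route through the analysis is sketched below (placeholder numbers only): subtract the background, fit a straight line to ln(net rate) versus time, and convert the decay constant to a half-life. A semi-log graph on paper is an equally acceptable approach.

```python
import numpy as np

# Placeholder data - replace with your own 30 s totals and measured background.
t = np.arange(0, 600, 30)                      # start times of 30 s intervals / s
counts_per_30s = np.array([520, 410, 330, 270, 225, 190, 160, 140, 122, 108,
                            98, 90, 84, 79, 75, 72, 69, 67, 65, 64])
background_per_30s = 60.0                      # from the background measurement

net = counts_per_30s - background_per_30s      # background-corrected counts
mask = net > 0                                 # only positive values can be logged

# ln(net) versus t should be a straight line of gradient -lambda (equation 1)
grad, intercept = np.polyfit(t[mask], np.log(net[mask]), 1)
half_life = np.log(2) / (-grad)
print(f"decay constant = {-grad:.4f} 1/s, T1/2 = {half_life:.0f} s")
```

Note that the late, low-count points are the noisiest; think about whether they should carry the same weight as the early ones.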

2. Background information Radioactive decay is the process by which unstable atomic nuclei lose energy. In this process particles of radiation are emitted, the three main types being alpha (He nuclei), beta (electrons) and gamma (photons). Since the energy involved in nuclear processes is high, the radiation is generally ionising. This property is exploited in the design of detectors of radiation but is also responsible for the danger associated with radioactive materials. The discovery of radioactive materials, by Henri Becquerel in 1896, led to great advances in nuclear and other branches of physics. In one strand, it was realized that nuclei could not only break up (fission) but also join together (fusion) and that the fusion process was responsible for the power output of the Sun and the stars. This solved one of the great mysteries of science at the time - that power output based on gravitational forces implied a much shorter age for the Sun than that implied by the evidence of geology and evolution.


2.1 The mathematics of radioactive decay It was realized early on that the radioactive decay of nuclei is a “stochastic” or random process, i.e. it is not possible to predict exactly when a nucleus would decay, instead, only a probability of it decaying can be found. Following from this the rate of disintegration of a given nuclide is directly proportional to the number of nuclei N of that nuclide present at that time:

dN/dt = −λN                [1]

where λ is the decay constant. However, rather than deal with 'probability of decay per second', it is more usual to describe the rate of decay of a radioactive material by its characteristic half-life. This is defined as the average time T1/2 it would take for half the number of nuclei in the material to decay, or alternatively, and as will be used as part of this experiment, for the decay rate to fall to one half of its original value.
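For reference, integrating equation 1 gives the link between the decay constant and the half-life (a standard result, quoted here as a reminder rather than derived in full):

```latex
\frac{dN}{dt} = -\lambda N \;\Rightarrow\; N(t) = N_0 e^{-\lambda t}, \qquad \frac{N_0}{2} = N_0 e^{-\lambda T_{1/2}} \;\Rightarrow\; T_{1/2} = \frac{\ln 2}{\lambda}
```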

2.2 The statistics of radioactive decay (Poisson statistics) Poisson distribution The measurement of radioactivity is a counting experiment; a detector counts the number of discrete events occurring in a fixed time interval. Very often with this type of experiment the data takes the form of a “Poisson distribution”. This is the second type of statistical data distribution examined in the first year laboratory, the other (Gaussian distribution) is investigated in Experiment 4.

The Poisson distribution is the limiting case of a “binomial distribution” when the number of possible events is very large and the probability of any one event is very small. The normalised distribution is given by

P(x) = (μ^x e^(−μ)) / x!                [2]

where P(x) is the probability of obtaining a value x, when the mean value is μ. The standard deviation for a Poisson distribution relates to the mean value and is given by σ(x) = √μ.

This distribution is unlike the normal or Gaussian distribution in that it becomes highly asymmetrical as the mean value approaches zero.

Counting experiments: the “signal to noise” ratio In all counting experiments*, the “quality” of the data is expected to “improve” with increasing counting time and counts. This can be understood as follows: the mean number of counts in the experiment, μ, is the “signal” whilst statistical variations in this signal are represented by the standard deviation σ(x) and can be thought of as “noise”.

In Poisson statistics σ(x) = √μ, therefore the signal/noise ratio = μ/√μ = √μ, i.e. the ratio increases with the square root of the number of counts. This is an often quoted and very important finding for understanding and designing experiments.

Put another way, if in a particular counting period an average of N counts are obtained, the associated standard deviation is √N (ignoring any errors introduced by timing uncertainties, etc). Clearly, the larger N the more precise the final result. For a given source and geometrical arrangement, however, N can be increased only by counting for longer periods of time.
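As a quick numerical illustration of this scaling (the counts are chosen purely for illustration):

```latex
N = 100 \;\Rightarrow\; \frac{\text{signal}}{\text{noise}} = \frac{N}{\sqrt{N}} = 10, \qquad N = 10\,000 \;\Rightarrow\; \frac{\text{signal}}{\text{noise}} = 100
```

i.e. counting for 100 times longer improves the signal to noise ratio by only a factor of 10.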

* Counting experiments are wide ranging. For physicists, counting photons to acquire a spectrum (such as that emitted by a star) is a relatively common task that comes in this category but even the number of letters sent by Einstein in set intervals has been analysed in this way.


2.3 Background radiation Part of this experiment involves measuring background radiation. This background level has many sources including long lived terrestrial radioactive species, cosmic rays and remnants from nuclear experiments. For most people the most significant source is due to radon gas formed as part of the decay series of uranium.

2.4 Philip Harris Protactinium Generator Protactinium234 has a half-life of approximately 70 seconds, and is suitable for the observation of radioactive decay. This isotope is one of the products from the U238 decay series, part of which is shown below.

238/92 U  ––(4.5x10^9 years)––›  234/90 Th  ––(24 days; low-energy beta)––›  234/91 Pa  ––(72 secs; high-energy beta)––›  234/92 U

To achieve isolation of Pa234, a less dense, water immiscible, organic liquid is added to a solution of a Uranium238 salt in concentrated Hydrochloric acid. Protactinium234 is soluble in this organic layer. When the liquids are shaken together and mixed, the Pa234 is extracted by the organic solvent. When the mixture is allowed to settle, a physical separation into two layers occurs, with the Pa234 now in the upper layer. The Pa234 decay is monitored, in this experiment, by a Geiger-Muller tube which is placed close to the top of the containment flask.

Several factors combine to make sure that the source exhibits a Pa234 half-life: Thorium234 is confined to the lower aqueous layer; beta radiation from this, and alpha radiation from the Thorium230, can scarcely penetrate the flask. U234 and U238 also both concentrate in the aqueous layer: they are alpha emitters. Pa234 is a beta emitter, with a high enough energy spectrum to penetrate both the liquid in which the source is sited and the walls of the flask. Radiation from freshly born Pa234 nuclides cannot penetrate through from the bottom layer.

2.5 The Geiger-Muller Detector A Geiger-Muller (GM) detector in its simplest form consists of a thin wire (the anode) mounted along the longitudinal axis of a cylindrical metal tube (the cathode). The tube is filled with a gas at low pressure and a potential difference is applied between the anode and cathode. Radiation entering the detector ionises the gas, producing, for each photon or particle entering, a burst of ions. These ions are accelerated to the electrodes by the potential difference and constitute an electrical current pulse. Successive pulses are recorded in a counter unit. Beta-particles are readily detected by a GM detector. Most alpha-particles cannot pass through the detector window. Gamma-rays are so penetrating that only a small, but constant, fraction of those entering the tube actually interact with the gas and are detected.


Figure 2: Schematic diagram of Geiger-Muller characteristic

For a fixed radiation rate the number of pulses detected depends mainly on the potential difference between the electrodes as shown in figure 2. As the potential difference is increased from a low value the pulse rate increases until the potential difference reaches a range over which the pulse rate changes very little. This is called the (Geiger) plateau. At higher voltages a continuous discharge occurs. The usual recommended operating potential difference for a detector is approximately half way along the plateau. However, not being too close to the extremes of the plateau will suffice.

Wider Applications The mathematics of radioactive decay is common to many areas of physics, such as the charging and discharging of capacitors. Counting experiments and their statistics are widespread in all sciences.


Experiment 9: Rotational motion and Moment of Inertia (MoI) with a torsion pendulum

Safety: This experiment makes use of a relatively long thin steel rod. Care should be taken to ensure that it is positioned below eye level and does not point towards the (eye of) the user.

Equipment List:

Outline The physics: A torsion pendulum is used to illustrate some of the concepts associated with rotational motion (motion around an axis). In particular the importance of the shape that is rotating is considered via its “moment of inertia” (or “rotational inertia”). Measurements are used to reveal the (unknown) internal structure of a hollow spherical body (a hockey ball). Experimental techniques: This experiment provides a good example of the process of establishing a scientific technique. A test phase (in which known samples are measured) characterises the system (i.e. calibrates it and determines its accuracy and precision) before it is used in anger on real (unknown) samples.

Experimental skills
Making and recording basic measurements; experimental observation; analysis of straight line graphs.
Establishing a scientific instrument by characterisation with known samples, before employing it to measure an unknown sample.
Detailed data analysis (of the hollow sphere arrangement).

Wider applications
At the large scale consider the Moon rotating around the Earth, the Earth around the Sun, the Sun around the galaxy. The spring semester module (PX1225 Planets and Exoplanets) shows how MoI measurements can reveal the internal structure of planets.
At the small scale consider electrons orbiting a nucleus.
At the human scale consider almost every machine: the motor car; the electric motor; water-pumps; windmills….

1. Background notes. School physics and mathematics courses discuss “translational” motion in which a body moves in one or two (or three) dimensions. Introducing “rotational” motion, in which a body turns about an axis (Resnick and Walker Chapters 10 and 11), allows any motion to be described. For example a ball rolling down a hill is a combination of both types. However, this experiment confines itself to illustrating the case of rotational motion and in particular focuses on the concept of the moment of inertia (MoI), the rotational equivalent of inertial mass. 1.1 The Torsion pendulum This is a variation on the mass on a spring experiment in which a vertical displacement of the mass from its equilibrium position results in simple harmonic motion (SHM) provided that the restoring force is proportional to displacement. Here the displacement is an angular displacement (rotation), θ, of the mass and the restoring torque (rather than force) is due to torsion (twisting) in the spring.

The condition for SHM here is that the restoring torque is proportional to the angular displacement


τ = −κθ                [1]

where κ (kappa) is a constant known as the torsion constant.

By comparison with SHM for a mass on a spring, the oscillation angular frequency of the system is expected to be

ω = √(κ/I)    (radians per second)                [2]

where I is the moment of inertia of the system.

1.2 Moment of Inertia, I (aka Rotational Inertia) The moment of inertia (MoI) of a body indicates how mass is distributed about its axis of rotation. It is a constant for a particular rigid body and axis of rotation (consequently the axis must be specified for the value to be meaningful). A point mass m a distance r from the rotation axis has a MoI of mr². The MoI of a body can be found by considering it as a collection of i particles of mass mi at different distances ri from the axis of rotation. The MoI of the ith particle is given by mi ri² and the total MoI, I, by the sum over all particles:

I = Σ mi ri²                [3]

This equation is extremely important generally (and to this experiment) for two main reasons: It indicates that masses further from the axis have a greater effect on MoI. It is the basis (via adding known MoI) of determining both an unknown MoI and the torsion constant of the spring.

1.3 MoI of different shapes

Resnick and Walker (Principles of Physics, chapter 10) discuss how the MoI of continuous bodies (of uniform density) can be found by replacing the sum with an integral

I = ∫ r² dm                [4]

where dm is a mass element and r its distance from the rotational axis. Selected results of relevance here are presented in Table 1.

Table 1. Moments of Inertia for shapes of importance here (r represents distances or radii as appropriate, L represents length)

Shape            Axis                                        MoI, I
Point mass       Through point                               mr²
Solid cylinder   Through central axis                        (1/2)mr²
Sphere           Through centre                              Hollow, thin walled: (2/3)mr²
                                                             Solid: (2/5)mr²
Thin rod         Through centre, perpendicular to length     (1/12)mL²

MoI are of the form n(mr²), n being different for different bodies. With this in mind the value n will subsequently be referred to as the “pre-factor”.


1.3.1 Spheres The different pre-factors in table 1 for thin walled hollow and solid spheres are of particular interest to this experiment. They indicate that as the wall thickness increases the pre-factor will decrease from 2/3 to 2/5, or alternatively that by measuring the pre-factor the wall thickness can be determined. Finding the pre-factor requires the MoI, mass and outer radius of the sphere.

The mathematics will be illustrated by starting with the MoI of a thin walled sphere and developing an integral for the general case and the specific case of a solid sphere.

The mass of a (thin walled) sphere of density ρ, radius r and thickness dr is given by its density multiplied by its surface area and thickness, i.e. ρ4πr²dr. Therefore an alternative form of its MoI is

I = (8/3)πρr⁴ dr                [5]

From this the MoI of a thick walled sphere can be found using a straightforward integration

I = ∫ (8/3)πρr⁴ dr  (integrated from r1 to r2)                [6]

where r1 and r2 are the inner and outer radii respectively.

For the case of a uniform solid sphere of radius r, we have r1 = 0, r2 = r and m = (4/3)πρr³, so that

I = (8/15)πρr⁵ = (2/5)mr², as expected.
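As a check on your own working (this intermediate result is not given in the script, so treat it as something to verify rather than quote), carrying out the integral in equation 6 between r1 and r2 gives

```latex
I = \frac{8}{15}\pi\rho\left(r_2^{5}-r_1^{5}\right), \qquad m = \frac{4}{3}\pi\rho\left(r_2^{3}-r_1^{3}\right) \;\Rightarrow\; n = \frac{I}{m r_2^{2}} = \frac{2}{5}\,\frac{r_2^{5}-r_1^{5}}{r_2^{2}\left(r_2^{3}-r_1^{3}\right)}
```

which reduces to 2/5 when r1 = 0 and tends to 2/3 as r1 approaches r2.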

1.4 Characterising a multiple component system

If, as here, the torsion constant of the spring and the rotational inertia of a component are unknown then experiments must be devised to find them. The approach is to add known rotational inertia (from equations 3 and 4 and table 1) to the system and find the effect on the frequency of the torsion pendulum.

If the unknown (starting) rotational inertia is I0 and that for i additional bodies is IA = Σ Ii, then the total rotational inertia is

I = I0 + IA                [7]

where I0 is fixed and unknown but the (multiple) contributions to IA are known. With this in mind equation 2 can be re-written

1/ω² = I/κ = I0/κ + IA/κ                [8]

So that a graph of 1/ω² versus IA will be a straight line of gradient 1/κ and intercept I0/κ, allowing both I0 and κ to be found.

Note: with two unknowns a minimum of two measurements are required but in practice more will be taken to reduce errors and improve precision.
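A minimal sketch of the straight-line analysis of equation 8 is given below; the added MoI values and periods are placeholders, and ω comes from your measured periods via ω = 2π/T.

```python
import numpy as np

# Placeholder data - replace with your own added MoI values and measured periods.
I_A = np.array([0.0, 0.5e-4, 1.0e-4, 1.5e-4, 2.0e-4])   # added MoI / kg m^2
T = np.array([1.15, 1.40, 1.62, 1.81, 1.99])             # measured periods / s

omega = 2 * np.pi / T
y = 1.0 / omega**2                   # equation 8: 1/omega^2 = I_0/kappa + I_A/kappa

gradient, intercept = np.polyfit(I_A, y, 1)
kappa = 1.0 / gradient               # torsion constant / N m rad^-1
I_0 = intercept * kappa              # MoI of the main body / kg m^2
print(f"kappa = {kappa:.3e} N m/rad, I_0 = {I_0:.3e} kg m^2")
```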


2. Experiment

2.1 Apparatus

High stability (triangular based) retort stand. Moment of inertia kit: main body; thin rod, 2x add-on masses, training hockey ball with screw thread. Oscillations are timed with a stop watch.

Table 1 Properties of components (errors represent range of values measured)

Body Mass /g

Main body (with 2 screws) 73.0±0.1

Thin rod 16.55±0.15

Short cylindrical mass (with screw) 16.5±0.1

Hockey ball* (diameter 7.23±0.02 cm) 157±5

Spring 4.70±0.02

* Hockey balls are not all the same. Those studies here are made of spin cast PVC to give a thick walled sphere and hollow centre.

2.2 Thin rod, point masses and characterisation of the system

This experiment adds a thin rod (symmetrically/balanced) to the main body and then a matched pair of masses to the rod at different distances from the rotation axis. Resulting changes in angular frequency illustrate the operation of a torsion pendulum (equation 2) and the role of MoI, and allow the torsion constant of the spring, κ, and the rotational inertia of the main body, I0, to be found.

Note:

An assumption will be made that, as the spring is loaded and extended, its torsional constant remains constant (measurements have been made that support this).

In the experiment the periods of oscillation are the main measurement. It is suggested that 10 oscillations (periods) are measured 3 times. To start the oscillations rotate the main body by ~45° taking care to minimise any subsequent up/down motion. Do this for:

The main body (with 2 screws attached).
The main body with the thin rod attached centrally.
The above with the two small masses symmetrically (so that they are balanced) attached at 8 distances from the axis.

Hint: you will need to use Table 1 to calculate the MoI of the fixed rod and the masses at each of their positions.

Referring to equation 8, draw a suitable graph and use it to help determine values for I0 and κ and their associated errors. Hint: you will also need to calculate ω (= 2π/T) from your measured periods.

2.3 Hollow sphere (hockey ball)

Armed with the characteristics of the torsion pendulum (i.e. values for I0 and κ) the next step will be to use our (now established) scientific instrument to measure and learn something about an unknown object. The measurement here is quick and easy but data analysis will take some time.


Screw the hockey ball to the main body – there is no need to use more than ~half the thread on the screw. Hint: the measurement is easier if you keep the thin rod (without masses) attached.

Carefully measure the period of oscillation. Calculate the moment of inertia of the hockey ball (using equation 8), then its “pre-factor” (n = I/mr²). You may need to refer back to section 1.3 at this point.

2.3.1 Further data analysis Extracting meaning from data (like taking measurements) is a skill and one that undergraduates initially struggle to engage with:

It’s easier to simply present measurements and superficial analyses. It is a step further from experiments that simply illustrate a piece of coursework. Thought, effort and time are required, and often it isn’t obvious what the course of an analysis might be or where it might lead.

This measurement, where a little data leads to a relatively large amount of analysis, is a good one to use to illustrate the analysis process and the way scientists question data.

As with most problem solving the biggest hurdle is overcome by starting/getting going, so it’s always best to start with something simple and easy:

Superficially consider the pre-factor (and its errors) for the hockey ball: Is it in the expected range (2/5-2/3) - and therefore reasonable?

If not then there is a problem that needs to be corrected: always start by checking for mathematical errors (everyone makes them). If there is still a problem it may be indicating that there are systematic errors – finding this may be a larger task.

If it is in the expected range where is it? Does it imply a very thin or thick walled sphere?

(This is not to pre-judge the result, just a way of thinking scientifically).

Quantitative analysis: figure out a value for the wall thickness using equation 6.

There are many ways of doing this – but being unfamiliar with the measurement and analysis it is best to pick one that is intuitive, instructive, easily checked and preferably general.

Generate a graph of the expected pre-factor (n = I/(m·r2²), where r2 is the outer radius) versus the ratio of radii (inner/outer).

This can be achieved by setting r2 to 1 and varying r1 between 0 and 1 – as r1 then represents the ratio. Equation 6 is then

I = ∫ (8/3)πρr⁴ dr  (from r = r1 to r = 1)  =  (8/15)πρ(1 − r1⁵)        (0 ≤ r1 ≤ 1)                [9]

and the mass of the hollow sphere is

m = (4/3)πρ(1 − r1³)        (0 ≤ r1 ≤ 1)                [10]

Tabulate values for I and m as a function of r1 and use this to generate your graph.
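A minimal sketch of the tabulation (equations 9 and 10 with r2 set to 1, as described above; the density cancels in the pre-factor so any value will do):

```python
import numpy as np

r1 = np.linspace(0.0, 0.99, 100)    # ratio of inner to outer radius (r2 = 1)
rho = 1.0                           # density cancels in the pre-factor

I = (8.0 / 15.0) * np.pi * rho * (1.0 - r1**5)   # equation 9
m = (4.0 / 3.0) * np.pi * rho * (1.0 - r1**3)    # equation 10
n = I / m                                         # pre-factor (since r2 = 1)

# n should run from 2/5 (solid sphere) towards 2/3 (thin walled shell)
for ratio, prefactor in zip(r1[::10], n[::10]):
    print(f"r1/r2 = {ratio:.2f}   n = {prefactor:.3f}")
```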

Check the graph. Does the pre-factor vary over the correct range (i.e. is the maths correct)?


Comment upon the regime where the experiment will be most (and least) sensitive to changes in wall thickness.

Compare the experimental pre-factor (and its error) with the graph to find the ratio of radii and then the inner radius (and their errors).

Don’t do the following if there isn’t time.

A further obvious stage (if you think about it) would be to calculate a value for the density of the material of the hockey ball. As it is already known that it is made of PVC, the value can be compared with its accepted range of density values – this might add to or subtract from confidence in the measurements. If the material was unknown the value might have suggested possible materials.


Experiment 10: Some end of semester fun physics Have you ever heard of Rube Goldberg, or Heath Robinson? Try typing them into Google to get a feel for what this experiment will be about. Many years ago, there was also a challenge on the TV called “the great egg race” which initially tasked teams of people to transport an egg without breaking it from A to B. This idea was later extended by “Scrapheap Challenge” which did similar challenges on a grander scale in, you guessed it, a scrap heap. For the ideal Rube Goldberg type machine see: http://www.youtube.com/watch?v=qybUFnY7Y8w

Whilst we are not intending anything quite on this scale, we want you to be imaginative and transport a small ball (an egg would just be too risky) from one end of a workbench to the other, in as many interesting phases as possible, with an understanding of the basic laws of mechanics and motion that you have been working on all semester. You should try to include elements of linear and angular momentum, friction (even flight if you think you can control it). However the sting in the tail is that at the end of the table the ping pong ball must drop into a bucket on the floor, and when you have done this you must be able to show a good understanding (calculation) of the typical energy stored in the system for you to have done this. Estimate how much energy was needed to set the system going, how much potential energy was stored, how much energy was dissipated, and how much kinetic energy was left at the end when the ball plopped into the bucket. Marks will be awarded for the creativity of the contraption and also for the creative understanding of the physics.

Within reason you are free to use whatever you can lay your hands on (beg and borrow). Check with demonstrators or the lab technician before using anything “unusual”. Trial and error is allowed, but bear in mind you have the necessary mathematical tools to have a first stab at calculating what you might need. You could just do a simple ramp at one end, at just the right angle to overcome friction losses, so that the ping pong ball just rolls into the bucket (too fast and it will miss). However, would this work every time (errors), and where’s the fun in that?

Lab diaries should be kept as usual, although this really will be a diary as you try things out and dismiss them as either wrong or not feasible. Failure is expected, and there is certainly no model solution. This is a competition!


Experiment 11: Report Writing. You have been tasked with writing a formal report on one of the experiments performed so far (numbers 4 to 9). This report forms 1/3 of your total module mark for PX1123 and is due in at 4pm on the last day of the semester. It is a compulsory element of the course and develops skills that you will need throughout your degree and in your future careers.

This is likely to be your first experience of writing a scientific report of your own findings and is a skill that you are expected to work on. You will write 2 formal reports in each of your 1st and 2nd years, leading up to the presentation of the larger body of work of your 3rd (and maybe 4th) year independent research projects. To help you, we have provided some guidelines on page 15 of this manual, with an example report given on page 161. You will find on Learning Central – General Support module, some useful screencasts on the use of Microsoft Word and Excel and also Equation Editor, as well as how to format a Formal Report.

Don’t get hung up on word count, although we do provide guidance. In scientific writing it is very important to say as much as is needed while using as few words as possible. These reports should be thorough, but repetition should be avoided. The entire report should be clear and straightforward, with good flow between sections. We strongly advise you to read the background material provided. Maybe also take a look at some scientific papers from common Physics journals; this will get you used to the form of language used and basic formatting rules.

Your lab supervisors are available today to answer any questions you have about this task. ANY questions! You can ask us about using Word, use of language, how you format things, how you write an abstract, what references are, etc. Your report will be returned to you at the beginning of the second semester with a large amount of feedback. We will also run a special session on report writing in PX1223 – so you can see how important we think this skill is! Do your best.


HAPPY NEW YEAR! Experiment 12: Feedback and Report Writing. At the end of PX1123 you wrote a Formal Report on one of the experiments you had performed during the semester. These will have been thoroughly marked and lots of feedback comments given. These will be handed back to you at the end of this session. You will be required to write and submit another such report for PX1223 – it is expected that you will have taken on board the feedback given and can greatly improve upon your first attempt. This session is to assist with that process, so that you have a much clearer idea of what will be expected of your future formal reports (in 2nd year lab and your 3rd year project).

So part 1 is to reread the information given on page 15 and the screencasts available on Learning Central. In part 2 you’ll be given a mock report – absolutely full of common mistakes. You are to go through this, mark it and make a list of all the errors. Your lab supervisor will then discuss these with you. Check them against the advice given in section 1. For part 3, you will be given 3 real reports to mark and rank in quality order. And finally, you should reread your own PX1123 report and understand the feedback you’ve been given. Ask for explanation – we want you to do a really good job next time! If you are uncertain as to how to use certain word-processing tools (for example an equation editor), this is a good opportunity to ask.

At the end of the session we will ask you to write a reflective statement on your performance and achievements during PX1123. This is a frequently used tool for assessing your own professional skills and progress, which you will use again. We have provided a template on Learning Central such that you can see the key areas on which you should comment. This piece of work will be assessed and forms your mark for this week.


Experiment 13: Optical Diffraction Safety Aspects: You must take great care when using the laser to avoid damage to your eyes. In no circumstances must you look along the main beam. You must also take care that specularly reflected beams do not enter your eye when you are adjusting the various components. Check with a demonstrator before starting the experiment.

Before coming to the lab, remind yourself about optical diffraction. Use an A level reference or read some of Chapter 36 (p990) of The Wiley Plus “Principles of Physics”.

Outline In optics, Fraunhofer (or far-field) diffraction is a form of wave diffraction that occurs when field waves are passed through an aperture or slit. In this experiment you will study quantitatively and qualitatively various diffracting objects and their diffraction patterns, by using a laser as a source of monochromatic light and a series of apertures, aligned on an optical bench.

Experimental skills Using a HeNe laser, and taking relevant safety considerations. Careful experimental alignment and set-up using an optical bench. Making use of observations and trial/survey experiments (as mentioned in Experiment 3) prior to taking detailed measurements.

Wider Applications Any real optical system (a microscope, a telescope, a camera) contains finite sized components and apertures. These give rise to diffraction effects and fundamentally limit the obtainable resolution of any optical device. (There may be other optical imperfections too, such as scratches or misalignment.)

Thus, the resolution of a given instrument is proportional to the size of its objective, and inversely proportional to the wavelength of the light being observed.

An optical system with the ability to produce images with angular resolution as good as the instrument's theoretical limit is said to be diffraction limited. In astronomy, a diffraction-limited observation is achievable with space-based telescopes, of suitable size.

1. Introduction Diffraction is the name given to the modification of a wavefront as it passes through some region in which there is a diffracting object. The object is usually an obstacle or an aperture in an opaque sheet of material. Huygens’ Principle postulates that all points on the modified wavefront act as secondary sources of radiation. According to Figure 1, at any point P beyond the object the secondary waves superpose, or interfere, to give a resulting disturbance which is characteristic of the diffracting object. This resulting disturbance is usually referred to as the diffraction pattern of the object, although interference pattern would be a better name.


Figure 1: Diffraction through a slit

The form of the diffraction pattern also depends on the distance, D, of the observation plane from the object. Diffraction effects can be divided conveniently into two categories. (1) Near-field, or Fresnel diffraction, for which D is fairly small

(2) Far-field, or Fraunhofer diffraction, for which D >> a²/λ, where a is the size of the diffracting unit and λ is the wavelength of the scattered radiation. In this experiment you will be concerned only with Fraunhofer diffraction effects. The experiment consists of studying, either quantitatively or qualitatively or both, various diffracting objects and their diffraction patterns.
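As a rough, illustrative check of this condition (the slit width here is an assumed value, not one of the slide dimensions): for the HeNe wavelength λ ≈ 633 nm and a slit of width a ≈ 0.1 mm,

$$\frac{a^2}{\lambda} \approx \frac{(1\times10^{-4}\,\mathrm{m})^2}{6.33\times10^{-7}\,\mathrm{m}} \approx 1.6\ \mathrm{cm},$$

so an observation distance D of a metre or more comfortably satisfies D >> a²/λ.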

2. Experimental set-up and adjustment of the apparatus

2.1 The laser
The source of radiation is a 1 mW helium-neon (HeNe) laser which emits a coherent beam of light of approximately 4 mm² cross-sectional area. Switch on the laser and adjust it so that the beam is travelling parallel to the longitudinal axis of the optical bench. Make a crude adjustment first by standing back and using your eye to judge how parallel the axis of the laser is to the optical bench. Then, fine adjustment can be made by checking the beam position on a piece of white card as it is moved along the optical bench. Hold the white card in one of the holders provided and check that the beam strikes the card at the same point, which may be marked with a cross, wherever along the bench it is. Make adjustments using the vertical and transverse fine adjustment knobs on the laser baseplate. Don’t spend too much time doing this; if you’re having trouble, talk to a demonstrator.


2.2 Objects and holder
Mount the three-jaw slide holder in a saddle positioned close to the laser. You are provided with a series of mounted 2” x 2” slides, etched into which are various diffracting objects. These slides are unprotected and must only be handled by their edges to avoid damage.
Diffracting object(s):
SLIDE 1: One-dimensional diffraction grating.
SLIDE 2: Double slits.
SLIDE 3: A series of single slits of different widths.
SLIDE 4: Two-dimensional diffraction grating.
SLIDE 5: One-dimensional diffraction grating.

3. Measurement of the width of the central peak
Place slide 3 in the slide holder and mount it close to the laser at one end of the bench. Adjust it horizontally until the light is passing through slit C and displaying a clear diffraction pattern on the wall. Always look along the bench, away from the laser, when making adjustments. Measure the distance, D, between the slide and the wall. Observe the pattern on the wall and sketch it, to scale, in your lab book. Is the pattern what you expect? What is the diffracting object? Accurately measure the width of the central peak, W. The peak width W is given by:
W = Kaⁿ ,  [1]
where K depends on D and λ, and a is the width of the slit (Figure 1). Repeat this measurement for slits D, E, F and G. Compare the width of the central peak with the slit widths, which are given in μm on the packet containing the slides. (Record all measurements in metres!) Rearrange equation [1] so that a plot of W as a function of a will give you a straight line graph and, using appropriate graph paper, plot a graph to find the integer n. What do you think is the relationship between K, D and λ? (Hint: use dimensional analysis to work it out and then refer to the literature to check the correct equation.)
4. Determination of the wavelength of the laser light
Now use SLIDE 1 to obtain the diffraction pattern as illustrated in Figure 2. Using the travelling microscope and the Rayleigh mean method (if in doubt, ask a demonstrator), determine the repeat distance d of this one-dimensional grating. Place the slide in the slide holder so that the grating is illuminated by the laser and the diffracted beams lie approximately in a horizontal plane. Maximise the size of this pattern so that you can easily


determine the zeroth order (centre) and as many higher orders as possible. Sketch and describe the pattern. Now, by careful experimental measurement it should be possible to determine the wavelength of the laser light. The wavelength of the light from the laser is given by
λ = d sin θm / m ,  [2]
where the angle θm is indicated in Figure 2.

Figure 2: Defining d and θm

Because θm is small, sin θm ≈ tan θm = x(m)/D, and [2] becomes
λ = d x(m) / (mD) .  [3]
Note x(m) is the distance between the centre of the pattern and the mth diffraction spot. Rearrange the equation to plot a suitable straight line graph in order to determine λ, the wavelength of the HeNe laser. Check that your answer is sensible!
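If you would like to check your graphical analysis numerically, a minimal Python sketch of the straight-line fit implied by equation [3] is given below; the values of d, D and x(m) are placeholders, not real data.

import numpy as np

# Placeholder measurements - replace with your own values (all in metres)
d = 50e-6            # grating repeat distance from the travelling microscope
D = 2.50             # slide-to-wall distance
m = np.array([1, 2, 3, 4, 5])                        # diffraction order
x = np.array([0.032, 0.063, 0.095, 0.127, 0.158])    # spot distance from centre

# Equation [3]: x(m) = (lambda * D / d) * m, so a fit of x against m has gradient lambda*D/d
gradient, intercept = np.polyfit(m, x, 1)
wavelength = gradient * d / D
print(f"gradient = {gradient:.4e} m per order")
print(f"wavelength = {wavelength * 1e9:.0f} nm (HeNe should be ~633 nm)")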

5. Two dimensional grating
SLIDE 4 is a two-dimensional diffraction grating. Use any convenient diffraction method to find the ratio of the repeat distances in the two principal directions. Remember to sketch your observations and discuss.


Experiment 14: Propagation of Sound in Gases. Note: This experiment is performed in the dark room. SAFETY ASPECTS: MAKE SURE THAT THE ROOM FAN IS SWITCHED TO EXTRACT AND IS WORKING.

Outline

The speed of sound is commonly used to refer specifically to the speed of sound waves in air, although the speed of sound can be measured in virtually any substance and will vary. The speed of sound in other gases will be dependent on the compressibility, density and temperature of the media. You will investigate these dependencies by studying the sound waves set up in various gases contained in a gas cavity.

Experimental skills Observation of longitudinal waves. Understand the use of a microphone as an acoustic to electric transducer. Hence using an oscilloscope to study non-electrical waves. Careful use of gases and gas cylinders.

Wider Applications
In dry air at 20°C, the speed of sound is 343 metres per second. This equates to 1,236 kilometres per hour, or about one kilometre in three seconds. The speed of sound in air is referred to as Mach 1 by aerospace professionals (i.e. the ratio of air speed to local speed of sound = 1).

The physics of sound propagation, reflection and detection is used extensively for underwater locating (SONAR), robot navigation, atmospheric investigations and medical imaging (ultrasound).

The high speed of sound in helium is responsible for the amusing "Donald Duck" voice which occurs when someone has breathed in helium from a balloon!

1. Introduction The speed of propagation of a sound disturbance in a gas depends upon the speed of the atoms or molecules that make up the gas, even though the movement of the atoms or molecules is localised. The r.m.s. speed of molecules of mass m in a gas at Kelvin-scale temperature T is given by;

c_rms = √(3kT/m) ,
where k is the Boltzmann constant. The sound is not propagated exactly at the speed c_rms but at √(γ/3) times it, where γ is the ratio of the principal heat capacities of the gas. Thus
c_sound = √(γkT/m) .  [1]
Measurement of c_sound for known T and m therefore enables γ to be determined¹.


In this experiment the speed of sound in gaseous argon, air (mainly nitrogen) and carbon dioxide is measured by analysing the standing waves in a cavity.

2. Experiment 2.1 Apparatus The standing wave cavity is shown schematically in Figure 1.

Figure 1: Standing wave cavity

The loudspeaker, driven from an oscillator, directs sound into the tube; standing waves are obtained by adjustment of the piston and detected by the microphone insert at the end of the tube. The output from the microphone is amplified and displayed on the oscilloscope. Ensure that the amplifier is turned off when you have finished this experiment. Consider and write down the relationship between the length of the tube and the wavelength of sound for standing waves in closed and open tubes. Revise these expressions having considered this material using reference 2 or another source. Should you treat your equipment as having two closed ends or one open and one closed? Why? Show that the length of the tube L is related to the wavelength as L = λ/4, 3 λ/4, 5 λ/4, 7 λ/4

i.e. L = (2n − 1)λ/4, where n is an integer.

Note. The volume of sound coming from the speaker should be made as small as possible. Use the most sensitive Volts/Div setting on your oscilloscope. 2.2. Experimental procedure There may be traces of carbon dioxide in the tube from the previous experiment. This must be removed by pushing the piston in and out of the tube over its full travel several times.

Switch on the oscillator, and set it to give a sound at 1000 Hz. Find the approximate positions of the maxima in the signal amplitudes. Plot the signal amplitude as a function of piston position for all the accessible maxima (you will need to select a suitable step size). Now plot the piston position for each maximum on a graph and deduce the wavelength λ from the gradient. Calculate c_sound from the relation c_sound = fλ, where f is the frequency of the sound. Repeat the measurement for a number of other frequencies up to 5000 Hz. Consider whether there is any significant variation in your results, and attempt to account for it. Record the atmospheric temperature. Consider what effect the temperature might have on the measured speed of sound.
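A reminder of the step that links the graph to the wavelength (standard standing-wave reasoning, not specific to this apparatus): successive maxima occur each time the effective tube length changes by half a wavelength, so if x_n is the piston position at the nth maximum,

$$x_n = x_0 + n\,\frac{\lambda}{2}, \qquad \text{gradient of } x_n \text{ against } n = \frac{\lambda}{2}, \qquad c_\mathrm{sound} = f\lambda .$$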


Repeat the experiment at one of the higher frequencies with the monatomic gas argon in the tube. Before attempting this, liaise with the demonstrator, who will arrange for the supply of the gas from the gas cylinder. Repeat the measurements at one frequency with carbon dioxide in the tube. Note any differences in the quality of the signal obtained. Why does this happen?

Use your results to calculate the value of γ, the ratio of the principal specific heats of each of the three gases, from equation [1]. In equation [1],

k = Boltzmann constant = 1.38 × 10⁻²³ J K⁻¹

T = temperature in Kelvin

m = mass of one gas molecule, i.e. relative molecular mass × 1.66 × 10⁻²⁷ kg

The relative molecular masses of argon, nitrogen and carbon dioxide are 40.0, 28.0 and 44.0 respectively.

Tabulate the values of γ you obtain, together with the values given by the kinetic theory of gases.
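If you want to cross-check your hand calculation of γ, a minimal Python sketch follows; the "measured" sound speeds and temperature are illustrative placeholders only.

import numpy as np

k = 1.38e-23        # Boltzmann constant / J K^-1
u = 1.66e-27        # atomic mass unit / kg
T = 293.0           # measured room temperature / K (placeholder)

# relative molecular masses and illustrative "measured" sound speeds / m s^-1
gases = {"argon": (40.0, 319.0), "air (N2)": (28.0, 349.0), "CO2": (44.0, 268.0)}

for name, (M, c) in gases.items():
    m = M * u
    gamma = m * c**2 / (k * T)   # equation [1] rearranged: c = sqrt(gamma k T / m)
    print(f"{name:10s}  gamma = {gamma:.2f}")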

3. References 1 H.D. Young and R.A. Freedman, “University Physics”, Pearson, San Francisco, 2004, p547 2 Resnick & Walker, “Principles of Physics”, Wiley edition 9, p457.


Experiment 15: Measurement of e/m.
Introduction
This experiment, devised by J.J. Thomson in 1897, allows the ratio of the charge, e, of an electron to its mass, m, to be measured using a cathode ray tube. This is done by producing a beam of electrons (so-called cathode rays) in the form of a narrow ribbon from an electron gun in an evacuated glass bulb. The electron beam is intercepted by a flat mica sheet, one side of which is coated with a luminescent screen and the other side is printed with a centimetre graticule. By this means the path followed by the electrons is made visible. There are two basic methods by which e/m may be determined with the cathode ray tube. They are both based on the equations describing the forces exerted by electric and magnetic fields on moving charged particles. You will try both methods. In both methods, the beam of electrons emitted by the filament passes from right to left to strike the mica screen. We need an expression for the speed, v, of the electrons in terms of the accelerating voltage, Va, between the filament and anode. If the electrons are emitted

from the filament with zero kinetic energy and move in a good vacuum, their kinetic energy is just given by

½mv² = eVa ,  (1)

so that v can be found. We shall use this expression later.

Figure 1: Schematic diagram of apparatus


Take care! High voltages and delicate evacuated glassware are used in these experiments.

PLEASE READ YOUR "CODE OF PRACTICE FOR TEACHING LABORATORIES" SHEET

Method I - Electrostatic and Magnetic Deflection
In this method, the lower deflector plate is connected to the point marked I in Figure 1. A magnetic field B is applied with "Helmholtz coils" (described below). If this magnetic field points out of the plane of the diagram, there will be a downward force on the electrons (use Fleming's left-hand rule) equal to
Fmagnetic = Bev ,

where e is electron charge and v is their speed. At the same time, by connecting the plates so as to put a voltage VP across them (see diagram) an upward electrostatic force can be

applied to the electrons, equal to
Felectric = eE = eVP/d ,

where E is the electric field between the plates and d is their distance apart. In this experiment, E and B are adjusted so that there is no net deflection of the electron beam, so that the magnetic and electric forces must balance:

eVP/d = Bev ,

and this gives, with equation (1), an expression for e/m

e/m = VP² / (2VaB²d²) .

In fact, with the connections as shown, because the lower deflector plate is connected to the cathode while the upper plate is connected to the anode, the plate voltage is equal to the accelerating voltage, VP = Va, so that the previous equation simplifies to

e/m = Va / (2B²d²) .


Procedure
For a range of anode voltages, adjust the current through the Helmholtz coils to reduce the electron deflection to zero. The magnetic field in each case is calculated as described below. Tabulate your values of Va, I and B. Plot Va against B² and hence determine e/m. Estimate the precision of all your measurements and results. What do you think are the main sources of error? Your graph should, of course, be a straight line passing through the origin. Comment on any deviation from this.
Method II - Magnetic Deflection only
In this method, the lower deflector plate is connected to the point marked II in Figure 1. This means that the deflector plates are effectively not used in this experiment. If no compensating electric field is applied, the electron beam will be deflected into a circular path of radius r. Equating the magnetic force causing the deflection to the centripetal force gives

Bev = mv²/r .

Combining this with equation (1) therefore gives

e/m = 2Va / (B²r²) .

The advantage of this method is that it does not depend on deflection plates. It is very difficult to make deflection plates which have a sufficiently uniform electric field between them, and this leads to a systematic error in the determination of e/m. The only disadvantage of using this formula is that the value of r must be measured. To do this you can use the following relation for circles passing through the origin (which is at the exit aperture of the anode) and the points (x, y) on the graticule:

r = (x² + y²) / (2y) .

(Note: The origin of the graticule in some tubes is not exactly at the anode and a correction should therefore be made). Derive the above equation. Procedure As in the first method, choose several values of anode voltage. It is then easiest to adjust the current through the Helmholtz coils to produce a particular, easily measurable, radius of the electron beam path. For example, you could make the beam always pass through the


point (10.0, 2.6) cm. The magnetic field is calculated as described below. Note down the values of x, y and r and tabulate your values of Va, I, B. Estimate the precision of all your measurements and results. Plot Va against B². Choose another value of r and repeat.

Repeat for further positive and negative values of r (to get both positive and negative deflections, you will need to reverse the Helmholtz coil current). Calculate e/m for each r and compare and comment on your results.
Helmholtz Coils
The magnetic field acting on the electrons is provided by a so-called Helmholtz pair of coils, each of radius R, with their centres separated by a distance equal to their radius R. Such a configuration gives a substantially uniform magnetic field in the central region of the coils. The magnetic field B can be calculated from the formula

B = (4/5)^(3/2) μ0NI/R
or
B = 0.716 μ0NI/R ,

where μ0 = 1.26 × 10⁻⁶ T m A⁻¹ (or 4π × 10⁻⁷ henry metre⁻¹),

N = number of turns on each coil (320 turns of 22 swg enamelled copper wire in this case). I = current through the coils in ampere. The mean coil diameter is 13.6 cm in this case, so R = 0.068 m. The start of each coil is connected to the 4 mm socket (A) on the side of the coil bobbin, and the finish to the 4 mm socket (Z). For this experiment, in order that the fields of the coils should add, connect the power supply to sockets A, with sockets Z interconnected. DO NOT EXCEED A COIL CURRENT OF 1.5 A FOR MORE THAN 10 MINUTES. FOR LONGER PERIODS OF TIME, DO NOT EXCEED 1.0 A.
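A minimal Python sketch of the field calculation and the Method I slope analysis; the coil current values and the plate separation d are placeholders, while N = 320 and R = 0.068 m are the values given above.

import numpy as np

mu0 = 4 * np.pi * 1e-7     # permeability of free space / T m A^-1
N, R = 320, 0.068          # turns per coil and effective coil radius / m
d = 0.05                   # deflector plate separation / m (placeholder - use the real value)

def helmholtz_B(I):
    # B = (4/5)^(3/2) * mu0 * N * I / R, i.e. the 0.716 mu0 N I / R formula above
    return (4 / 5) ** 1.5 * mu0 * N * I / R

# Placeholder balance data: anode voltage Va and the coil current I that nulls the deflection
Va = np.array([1000.0, 1500.0, 2000.0, 2500.0])   # V
I = np.array([0.25, 0.31, 0.36, 0.40])            # A
B = helmholtz_B(I)

# Method I rearranged: Va = (e/m) * 2 * d^2 * B^2, so the slope of Va against B^2 is 2 d^2 (e/m)
slope, intercept = np.polyfit(B ** 2, Va, 1)
print(f"e/m = {slope / (2 * d ** 2):.2e} C kg^-1  (accepted value ~1.76e11)")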


Experiment 16: Variation of Resistance with Temperature. Safety Aspects: In this experiment you will use the cryogen liquid nitrogen (boiling point 77.3K). Please ensure that you read the safety precautions, write a risk assessment AND seek the assistance of a demonstrator before using this.

SAFETY PRECAUTIONS IN THE HANDLING OF LIQUID NITROGEN Avoid contact with the fluid, and therefore avoid splashing of the liquid when transferring it from one vessel to another. Remember that when filling a "warm" dewar, excessive boil-off occurs and therefore a slow and careful transfer is necessary. Do not permit the liquid to become trapped in an unvented system. If you do not wear spectacles, safety glasses (which are provided) must be worn when liquid nitrogen is being transferred from one vessel to another.
FIRST AID If liquid nitrogen contacts the skin, flush the affected area with water. If any visible "burn" results contact a member of staff.

Outline All materials can be broadly separated into 3 classes, according to their electrical resistance; metals, insulators and semiconductors. This resistance to the flow of charge is temperature dependent but the dependence is not the same for all material classes, because of the physical processes involved. In this experiment you will determine the behaviour of electrical resistance as a function of temperature for a metal and a semiconductor. You will confirm the linearity or otherwise of these behaviours.

Experimental skills
Ability to keep a clear head and organize a one-off experiment, paying careful attention to safety aspects.
Make and record simultaneous measurements of a number of time-varying quantities.
Determine realistic errors in these quantities and combine them.
Gain experience of liquid cryogens.
Fit measured data to linear, polynomial and logarithmic expressions.

Wider Applications
Many branches of physics and its applications involve the study and use of materials at cryogenic temperatures (those below ~150 K). By understanding the temperature dependence of material behaviour, we can use it to our advantage.
Modern imaging and communication systems rely on the sensitive, noiseless and reproducible detection and transfer of electrical information. This is often achieved by using cooled semiconductor devices.
Some materials become superconducting at cryogenic temperatures (i.e. at temperatures somewhat above absolute zero). This phenomenon has found application in medical imaging (MRI scanners depend on the huge magnetic fields achievable only by using superconducting coils), astronomical imaging (superconducting detectors are used to count 13 billion year old photons) and transport (MAGLEV trains).


1. Introduction
In this experiment you will investigate the variation of the resistance of: 1) a semiconductor (a thermistor); 2) a metal (copper) in the temperature range from ~120 - 290 K. For a metal the following equation [1] describes the linear behaviour of resistance R with temperature T:
R(T) = R273(1 + α(T − 273)) ,  [1]
where R(T) is the resistance at temperature T (in Kelvin), R273 is the resistance at 273 K and α is a constant known as the temperature coefficient of resistance, which depends on the material being considered and will vary slightly with the reference temperature (273 K here). However the behaviour may be more closely described by a 2nd order polynomial fit,
RT = R273{1 + α(T − 273) + β(T − 273)²} ,  [2]
where β is another constant. For a typical intrinsic semiconductor the electrical resistance obeys an exponential relationship with temperature. It takes the form of equation [3],
RT = a e^(b/T) ,  [3]
where RT is the resistance at T and a and b are constants. By using equations [1], [2] and [3], you are to find suitable graphical ways to verify or disprove these relationships. You may use Excel (or another plotting package familiar to you) to plot your data, BUT remember to take care with axes, apply suitable error bars and think about what your results mean.
2. Experiment
2.1 Apparatus
The metal you will test is in the form of a coil of fine wire. The semiconductor is a thermistor. Both of these are attached to the top of a copper rod. They are held in good thermal contact with it by a low-temperature varnish. The temperature of the specimens can be reduced by immersing the copper rod to various depths in liquid nitrogen, which boils at 77.3 K. The liquid nitrogen is poured into a Dewar flask contained in the box which supports the copper-rod assembly. The liquid-nitrogen level is gradually increased by adding liquid nitrogen through the funnel. An insulating cap is provided which, when placed over the top of the rod, thermally isolates the specimens from the surroundings and allows their temperature to fall to a value determined by the depth of immersion of the rod in the liquid nitrogen.


The temperature of the specimens is measured with a thermocouple. This consists of two junctions of dissimilar metals arranged as shown in Figure 1. If the two junctions are at different temperatures an e.m.f. is generated which, to a good approximation, is proportional to the temperature difference between the two junctions. By calibrating such a thermocouple, temperature differences can be determined by voltage measurements and these can be used to measure temperature if one standard junction is held at a well-defined fixed temperature.

Figure 1: Representation of back-to-back thermocouple junctions and circuit

In this experiment we use a copper-constantan thermocouple. One junction of this is embedded with the specimens in the varnish; the other, the standard, is kept at 77.3 K by immersion in liquid nitrogen contained in a separate Dewar flask. You will calibrate the thermocouple with the standard junction in liquid nitrogen while that attached to the metal rod remains at room temperature. The resistances of the copper and thermistor are read from multimeters suitably connected. The voltage across the thermocouple is also read by a multimeter. Ensure you can read all 3 scales simultaneously. 2.2 Calibration of the thermocouple Connect a multimeter to the appropriate thermocouple terminals on top of the rod. Immerse the free junction in liquid nitrogen and record a voltage. Take another voltage reading when the junction is at room temperature. You can now calibrate the thermocouple scale by assuming that the voltage is linearly related to temperature difference. (This is not strictly true but will suffice for our purposes.) Check your calibration with a demonstrator and ensure that you know how to use the thermocouple as a thermometer for the rest of the experiment. 2.3 Resistance measurements The magnitudes of the coil and thermistor resistances will be determined using multimeters set to the ohms range. Measure RC (the resistance of the copper coil) and RTh (the resistance of the thermistor) at

room temperature.


Place the insulating cap on top of the rod and start to add liquid nitrogen through the funnel. Note the readings on the 3 multimeters (thermocouple voltage, RC and RTh). Gradually add more liquid nitrogen and repeat. The object of the experiment is to obtain as many measurements of RC and RTh as possible over as wide a temperature range as possible. Remember to ensure that you have a simple diagram of your apparatus that would allow you to set the experiment up again.
Experimental Notes

You must work quickly and efficiently if you are to obtain sufficient experimental points on the graphs.
Handle the Dewar flasks carefully.
DO NOT touch the copper rod when it has been immersed in liquid nitrogen. If you do, you may freeze to the cold metal and give yourself a severe burn.
You will find that there will be little change in temperature of the coil and the thermistor when liquid nitrogen is added initially, but take care not to add too much liquid nitrogen at any one time or a large temperature drop may result. Once the rod has been cooled, it is not easy to raise the temperature again in the course of the experiment. This is a one-hit experiment!
The lowest temperature you are likely to reach will be at best ~120 K.
Make notes in your lab diaries of anything that happens during the experiment, e.g. where you note a change of range on the multimeter.
Make a note in your lab diary of the specific pieces of equipment that you have used.

3. Data analysis
Plot suitable graphs of your data and investigate the validity of equations [1] and [2] for the metal and equation [3] for the thermistor, finding values of α, β, a and b. You may use a computer package (Excel is recommended) to fit the equations but be careful to check your axes, show error information and quote gradients and results to a sensible number of significant figures. Does the resistance of the metal vary linearly with temperature? Which equation gives the best fit to the data? What do you notice about the variation for a semiconductor? Is the exponential fit of equation [3] good enough? How might the experiment, errors in the data, or your experimental method be improved?
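If you choose Python rather than Excel, the sketch below shows one possible way of fitting equations [1], [2] and [3]; the temperature and resistance arrays are placeholders for your own readings, and the logarithmic form used for the thermistor is just one choice of linearisation.

import numpy as np

# Placeholder data - replace with your measured temperatures (K) and resistances (ohm)
T = np.array([290.0, 260.0, 230.0, 200.0, 170.0, 140.0])
R_Cu = np.array([10.5, 9.3, 8.1, 6.9, 5.7, 4.6])              # copper coil
R_Th = np.array([1.2e4, 3.0e4, 9.0e4, 3.3e5, 1.6e6, 1.1e7])   # thermistor

# Metal, equation [1]: straight-line fit of R against (T - 273)
slope, R273 = np.polyfit(T - 273, R_Cu, 1)
print(f"R273 = {R273:.2f} ohm, alpha = {slope / R273:.2e} K^-1")

# Metal, equation [2]: quadratic fit; divide the linear and quadratic
# coefficients by R273 to get alpha and beta as defined in [2]
quad = np.polyfit(T - 273, R_Cu, 2)
print("2nd-order coefficients (highest power first):", quad)

# Thermistor, equation [3]: ln(R) = ln(a) + b/T, so fit ln(R) against 1/T
b, ln_a = np.polyfit(1.0 / T, np.log(R_Th), 1)
print(f"a = {np.exp(ln_a):.3g} ohm, b = {b:.0f} K")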


Experiment 17: Resistive and reactive impedances in RC circuits

Apparatus: GW Instek GDS-1022 oscilloscope, Thandar TG 102 function generator, ~0.022 μF capacitor, ~4.3 kΩ resistor, breadboard, various leads and wires. A Fluke multi-meter is available to measure the resistance value precisely.

Outline
An introduction to the behaviour of time-varying electronic signals in electronic circuits involving both reactive and resistive impedances, using a series combination of a resistor and a capacitor.
The investigation uses an oscilloscope to examine voltage signals for the capacitor coupled/high pass filter arrangement. This allows the frequency dependence of the phase angle between current and voltage and the filter performance to be found.

Experimental Skills
Reinforcement of the use of coaxial leads and circuit construction with breadboards.
Reinforcement of the use of oscilloscopes for measuring time-varying electrical signals.
Introduction of oscilloscope techniques for measuring the phase differences between signals in both Y-t and XY modes.

Wider Applications
Resistor-capacitor combinations are widely used in electronic circuits as frequency filters to let through (or pass) either low or high frequency signals, i.e. as low or high pass filters respectively.
With inductors in "LCR circuits", resonance behaviour can occur, described by mathematics that is analogous to mechanical forced, damped oscillatory systems: this behaviour is extensively covered in 1st year maths and in 2nd year physics labs. These tuning circuits were at the core of the wireless (radio) communication revolution.
The visualisation of orthogonal oscillating signals, as seen during this experiment with the oscilloscope in XY mode, has very close parallels with the different possible polarisations of light: the analogies of linear, circular and elliptically polarised light are all produced in this experiment.

1 Introduction
Capacitors, like resistors, "impede" current flow, although not in the same way:
A steady voltage applied to a capacitor causes a charge to build up on the plates of the capacitor, eventually preventing further current flow, whilst alternating currents can flow on and off the plates; hence low frequency signals are impeded but high frequency signals are not.
Whereas resistors heat up and so dissipate electrical power (I²R), capacitors do not: hence their impedance is said to be "reactive" rather than "resistive" (this is the same for inductors, whose impedance is also reactive).
Whereas current and voltage are in phase across a resistor, they are 90° out of phase across capacitors (and inductors).

It is the frequency dependence in alternating current (ac) circuits that has led to capacitors being widely used in electronic circuits. In analogue filter networks, they help remove high frequency signals from dc power supplies or remove unwanted direct current (dc) voltages from ac signals. In resonant circuits they can be used to ‘pick up’ particular frequencies.


1.1 Impedances of resistors and capacitors The above considerations lead to a distinction: the general term for something that impedes current flow is called an “impedance” (Z); whereas the impedances of capacitors (and inductors) are called reactive (X) and of resistors are called resistive (R). In all cases impedances are measured in ohms and current and voltage are related by

I = V/Z .  [1]
In addition, of particular relevance here is that the total impedance of a circuit containing a series combination of a resistor (R) and a capacitor (XC) is given by:
Z = R + XC .  [2]

Resistors: A reminder is probably not needed; however, the relationship between the current I through and voltage V across a resistor is I = V/R. If the voltage is varying sinusoidally (i.e. V = V0 sin(ωt), where ω (= 2πf) is the angular frequency) then:
I = V0 sin(ωt) / R .  [3]
Hence current and voltage are in phase.

Capacitors: The equation that describes the behaviour of capacitors is Q = CV, where Q is the charge on the plates of the capacitor and C, the constant of proportionality between the charge and the voltage across it, is known as its capacitance. In a similar fashion to a resistor, the magnitude of the charge on the capacitor varies in phase with the voltage. However, here it is the phase difference between current and voltage that is of interest.
Current is given by
I = dQ/dt = C dV/dt = ωCV0 cos(ωt) .  [4]
Hence the current leads the voltage by 90° and the magnitude of the reactance is given by
|XC| = |V| / |I| = |V0 sin(ωt)| / |ωCV0 cos(ωt)| = 1/(ωC) ,  [5]
i.e. the reactance of a capacitor decreases with increasing frequency.

1.2 Series RC circuit theory
Capacitors and resistors often occur in circuits together. In these "RC circuits" the capacitive reactance and resistance combine to produce an overall circuit impedance. The study of current and voltage in a series combination of a resistor and a capacitor is the subject of this experiment. Consider a sinusoidally varying voltage source connected to a resistor and capacitor in series as shown in figure 1. The instantaneous voltage across both components must equal the input voltage, and the instantaneous current at all parts of the circuit must be the same, hence equation [6]:
Vin = VR + VC = I(R + XC) .  [6]


Figure 1 Series combination of a resistor and capacitor and the voltages across them.

However, due to the phase differences the voltage across each component peaks at different times and therefore it is incorrect simply to add their amplitudes. To understand and express what is happening it is useful to make use of complex number representations. (The alternative, the use of phasors and phasor diagrams, is briefly considered in the appendix). An Argand diagram of the impedance, Z, is shown in Figure 2.

Figure 2 Argand diagram (similar to a phasor diagram) for the impedance of a series RC circuit. The angle φ is the phase angle difference between current and input voltage, Vin.

The resistive impedance, R is on the real axis as current and voltage are in phase (experimentally this is very important – measuring the voltage across any resistor gives the phase of the current and, if R is known its magnitude).

By contrast the reactive impedance of the capacitor is given by:
XC = −j/(ωC) ,  [7]
in order to be consistent with the current (which is the same at all parts of the circuit) leading the voltage across the capacitor by 90°.
Using equation [2] the impedance of the series combination of R and C is Ztot = R − j/(ωC).
The magnitude of the total impedance is given by |Ztot| = √(R² + (1/ωC)²).


1.3 The capacitor coupled, high pass filter arrangement A common practical use of RC circuits is as “frequency filters”. A voltage signal from one part of the circuit is passed to the filter (as the filter input signal, Vin) and a different signal (filter output, Vout = VR) is passed onto the next part of the circuit. With a series combination of one capacitor and resistor Vin is applied across both components whilst Vout is taken from either the capacitor or the resistor. Only the latter case will be investigated in this study and is shown in figure 3. It is known as the “capacitor coupling arrangement” as the capacitor connects to the circuit that precedes it.

Figure 3 Equivalent arrangements for capacitor coupling/high pass filter.

In this investigation the input signal to the filter Vin will be supplied by a signal generator and both Vin and Vout will be monitored by an oscilloscope. This arrangement was chosen since, as discussed previously, the voltage across the resistor (Vout) is the same as, and so gives, the phase of the current. From the Argand diagram in figure 2 the phase of the input voltage signal must be between that across the resistor and capacitor. In addition, the phase angle between input voltage (across both R and C) and current is given by

tan φ = 1/(ωRC) .  [8]

The amplitude of the output voltage can be found by considering the magnitude of the impedances and considering the circuit as a voltage divider:
|Vout| / |Vin| = R / √(R² + (1/ωC)²) = ωRC / ((ωRC)² + 1)^(1/2) .  [9]

A fuller treatment of this is given in the appendix.

Filter characteristics as a function of frequency, remembering that RC is the time constant of the circuit, are summarised in table 1.

Table 1 Filter characteristics as a function of frequency

Frequency, ω                   Output signal, |Vout|        Phase angle, φ
ω << 1/RC (low frequency)      |Vout| → 0                   φ → 90°
ω = 1/RC                       |Vout| = |Vin|/√2            φ = 45°
ω >> 1/RC (high frequency)     |Vout| → |Vin|               φ → 0°


At low frequencies the impedance of the capacitor dominates and most of the input voltage is dropped across it, whereas at high frequencies the reverse is true. This is why the arrangement is known as a high pass filter: the input signal is only passed on faithfully (i.e. without attenuation) at high frequencies.

Aside: A “tweeter” is the loudspeaker in audio systems that is designed to generate high frequency sound (f > 2 kHz typically). High pass filters very similar to the one measured here are used to ensure that only the high frequencies are delivered to the tweeter.
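Before taking data it can help to know roughly what to expect. The following is a minimal Python sketch of the theoretical curves of equations [8] and [9] using the nominal component values; your measured R and fitted C will differ.

import numpy as np
import matplotlib.pyplot as plt

R = 4.3e3        # nominal resistance / ohm
C = 0.022e-6     # nominal capacitance / F
f = np.linspace(200, 8000, 400)          # frequency range used in the experiment / Hz
w = 2 * np.pi * f

gain = w * R * C / np.sqrt((w * R * C) ** 2 + 1)    # |Vout|/|Vin|, equation [9]
phase = np.degrees(np.arctan(1 / (w * R * C)))      # phase angle, equation [8]

print(f"f = 1/(2 pi R C) = {1 / (2 * np.pi * R * C):.0f} Hz (expect gain 1/sqrt(2) and phase 45 deg here)")

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(f, gain); ax1.set_ylabel("|Vout| / |Vin|")
ax2.plot(f, phase); ax2.set_ylabel("phase / degrees"); ax2.set_xlabel("frequency / Hz")
plt.show()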

2 Experimental

Using the prototype board, assemble the circuit in Figure 3 making use of three coaxial leads and connector posts and ensuring that:
When connecting jump leads to the 4 mm posts, some bare wire protrudes from the post: a common cause of poor connections is pinching down on the wires’ insulation.
The earths of the three coaxial leads join at the same post (otherwise they will short out voltage signals).
The input and output signals are taken to Ch1 and Ch2 of the oscilloscope respectively.
The function generator is set to sine wave and its "dc offset" is turned off.

The capacitor and resistor provided have nominal values of 0.022 μF and 4.3 kΩ respectively. However, measure the resistor value with a multi-meter and use this later to find the value of the capacitor (the quoted tolerance on the value given is 10%).

With the circuit made up, get used to operating the oscilloscope again. Reminder: a summary version of how to use the oscilloscope can be found in the background notes. But to start:
Turn on the oscilloscope and when the GW Instek banner has disappeared press "Save/Recall", then select "Default Setup" and finally press "Autoset".
Or you could simply press "Autoset" – but this may recall unsuitable previous settings.

Now:
Adjust the signal generator to set an input signal (dc offset in the off position) with a peak-to-peak amplitude of ~3 V.
Use the vertical adjustments on Ch1 and Ch2 so that they are both at 0 V (the position appears at the bottom left of the trace as they are being adjusted) to make phase and signal changes more obvious.
Check that the circuit is working as expected, i.e. that as the input signal frequency is varied the output signal size and phase vary roughly as described at the end of section 1.3.

Note: the same circuit arrangement will be used for all subsequent measurements. If you are unsure that it is working correctly check with a demonstrator.

2.1 Measuring the filter characteristics
With the set-up as above, and with the time base and y scales adjusted as appropriate, perform measurements of frequency, f (and so period, T = 1/f), Vin (although this isn't adjusted it may drift, so measure it), Vout and the lead or lag of one oscillation against the other, dT (and so the phase offset φ).
Note: φ = 360° (dT/T) degrees (or 2π (dT/T) radians), as 1 period, T, corresponds to 1 cycle, 360°, 2π radians.

Do this as a function of frequency (take ~10 readings in the range 200 Hz to 8000 Hz) recording the results in a suitable table.

Most measurements are made by the oscilloscope and can be read from its display using its “measure” facility (use peak to peak amplitudes for voltages), but for dT use the two X cursors (and then convert to radians or degrees as required). It will be necessary to toggle between “cursor” and “measure”.

Make plots of phase angle and |Vout|/|Vin| versus frequency; use these to find the condition ω = 1/RC and so determine the value of the capacitance (see table 1).

Using the phase angle data, plot a suitable straight line graph (see equation [8]) and use this to determine the capacitance; compare the value with that above.
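A minimal Python sketch of that straight-line analysis (the frequencies and phase angles below are placeholders, not real data): equation [8] says tan φ = 1/(ωRC), so plotting tan φ against 1/ω should give a gradient of 1/(RC).

import numpy as np

R = 4.30e3                                          # your multi-meter value of R / ohm
f = np.array([400., 800., 1600., 3200., 6400.])     # Hz (placeholder)
phi = np.array([76.6, 64.6, 46.4, 27.7, 14.7])      # measured phase / degrees (placeholder)

w = 2 * np.pi * f
gradient, intercept = np.polyfit(1 / w, np.tan(np.radians(phi)), 1)   # gradient = 1/(R C)
C = 1 / (R * gradient)
print(f"C = {C * 1e9:.1f} nF  (nominal 22 nF, 10% tolerance)")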

2.2 Using the oscilloscope XY mode for determination of filter characteristics. Here the x axis is not time dependent, instead one of the two channel inputs produces x deflections and the other y. This mode will be used to repeat the measurements of the previous section, but first some explanation.

The movement of the spot on the screen is then described by

x = A sin(ωt);  y = B sin(ωt − φ) ,  [10]
where φ is the phase angle between the 2 inputs. In general this represents an ellipse, as shown in Figure 4, although depending on the phase angle the trace may appear as anything from a straight line (in phase) through to a perfect circle (A = B and 90° out of phase). Such plots are known as Lissajous plots or figures.

Figure 4: Elliptical trace for the measurement of phase angle. Also shown dotted are straight line (φ = 0) and circular (φ = 90°, A = B) traces.

Understanding the XY mode
To understand what you are seeing do the following (you will almost certainly need to get help from a demonstrator to get you started here; a short Python check is sketched after this list):
Sketch one period of a time-varying sine wave and a cosine wave, both of amplitude 1, in your diary. On both mark 10 reasonably evenly time-spaced points and number these from 0 to 9 (points at the start and end of the cycle numbered 0 and 9 respectively).
Draw an XY plot with scales -1 to +1 in both X and Y. On this, and for the case when both X and Y vary sinusoidally, plot out the time progression of the display using the numbers 0 to 9 as markers (rather than x's or o's). This is a Lissajous figure for the case of signals of the same frequency and in phase.
Repeat for X a sine wave and Y a cosine wave. This is the case of X and Y 90° out of phase. Using the time progression note whether the resulting (hopefully circular) trace was drawn out in a clockwise or anticlockwise sense.
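Once you have made the hand sketches, you may like to check them against a short Python version of the same Lissajous construction (purely optional; numpy and matplotlib assumed to be available):

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 10, endpoint=False)   # the 10 numbered points around one cycle

for phi_deg in (0, 45, 90):                          # in phase, intermediate, 90 deg out of phase
    x = np.sin(t)
    y = np.sin(t - np.radians(phi_deg))
    plt.plot(x, y, "o-", label=f"phase = {phi_deg} deg")
    for n, (xi, yi) in enumerate(zip(x, y)):
        plt.annotate(str(n), (xi, yi))               # label the time order, as in the hand sketch

plt.xlabel("X (Ch1)"); plt.ylabel("Y (Ch2)"); plt.legend(); plt.axis("equal")
plt.show()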

Analysis of plots such as figure 4 (to find both |Vout|/|Vin| and φ)
To find φ, the line y = 0 (passing through A', N', O, N and A) through the ellipse is considered. We have
y = B sin(ωt − φ) , so that on this line ωt = φ.  [11]
Hence x = A sin(ωt) = A sin φ = ON ,  [12]
and sin φ = x/A = NN'/AA' .  [13]
Here AA' is the length between the two extreme x values of the ellipse, and NN' is the length given by the intersection of the ellipse with the x axis. Using the cursors it is more convenient to obtain these lengths from the oscilloscope trace than N and A.

If the input signal (Vin) to a circuit (here the signal from the signal generator) is applied to channel 1 (X) and the output signal (Vout) from the circuit (here from across the resistor) to channel 2, then from figure 4 we have:
|Vout| / |Vin| = B/A = BB'/AA' = Ch2pp / Ch1pp ,

where the pp subscript indicates peak-to-peak amplitude, as the oscilloscope finds in its "measure" mode.
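For example (illustrative cursor readings only): if the cursors give AA' = 6.0 divisions and NN' = 3.0 divisions, then

$$\sin\varphi = \frac{NN'}{AA'} = \frac{3.0}{6.0} = 0.50 \quad\Rightarrow\quad \varphi = 30^{\circ}.$$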

Measurements
To put the scope in XY mode press the "menu" button under horizontal and then select XY.
Make measurements of |Vout|/|Vin| and phase offset φ versus frequency.
Add this data to your earlier plots (§2.1) of phase angle and |Vout|/|Vin| versus frequency and comment on the agreement.

3 Conclusions
As part of your concluding remarks consider the relative merits of the different methods for measuring phase offsets and determining C.


Appendix: Complex number treatment of output voltage across the resistor

We are dealing with a potential divider circuit in which (using the complex Ohm's law)
Vin = I (R − j/(ωC)) ,
and, since the current must be the same in all parts of the circuit,
I = Vin / Ztot = Vin / (R − j/(ωC)) .
Rearranged, this gives
Vout = IR = Vin R / (R − j/(ωC)) = Vin R (R + j/(ωC)) / (R² + (1/ωC)²) .
The amplitude of the output (usually of most interest) is given by:
|Vout| = (Vout Vout*)^(1/2) = Vin R / (R² + (1/ωC)²)^(1/2) = Vin ωRC / ((ωRC)² + 1)^(1/2) .

Phasor diagrams
These are an alternative to the Argand diagram of figure 2 for representing such time-varying voltages and currents, and are used in Halliday and Resnick (chapters 16 and 31). The figures may appear very similar to figure 2; however, the vectors rotate anticlockwise with constant angular velocity corresponding to the angular frequency of the quantity involved. The length of each vector is equal to the amplitude of the quantity and the instantaneous value of a quantity is represented by its projection onto the vertical axis.


Experiment 18: Microwaves

Safety

Although the microwave power used in this experiment is very low students should take care not to look directly into the source when it is switched on.

The resistor mounted on the back of the transmitter does get hot after extended use.

Outline The properties of waves in general and electromagnetic waves in particular are examined by using microwaves of wavelength ~2.8 cm. The properties examined include polarization, diffraction and interference. The interference experiments are similar to those performed with visible light at much shorter wavelengths (and sound with similar wavelengths). However, the macroscopic wavelength of microwaves is exploited to reveal behaviour not readily accessible at short wavelengths, in particular phase changes on reflection and edge diffraction effects.

Experimental skills Experience of handling microwave radiation, sources and detectors. Experience of polarized electromagnetic radiation.

Wider Applications Microwave radiation is used in communications, astronomy, radar and cooking.

Mobile phones use two frequency bands at ~950 MHz and ~1850 MHz. Astronomy: the cosmic microwave background radiation peaks at λ = 1.9 mm. Microwave ovens use a frequency of 2.45 GHz (wavelength 12.2 cm). The oscillating electric field interacts with the electric dipole of water molecules so that they rotate, have more energy and so get "hotter". Since water molecules in solid form cannot rotate, ice is an inefficient absorber of microwave radiation.

The manipulation of polarization is an important way to exploit electromagnetic radiation. This is not restricted to plane polarization. For example “circularly” polarized light is exploited in the latest 3D films shown at cinemas.

Electromagnetic radiation detection is common to many branches of physics. For example with an array of detectors similar to the ones used here and some optics astronomical imaging becomes possible – this is a very active research area within this School.

Equipment List: Microwave generator, two detectors (point probe and horn), Multi-meter (using mV or V scale, depending on equipment), metal plates and grid.

1. Introduction
The name "microwave" is generally given to that part of the electromagnetic spectrum with wavelengths in the approximate range 1 mm - 100 cm (10⁻³ - 1 m). This compares with the visible region with wavelengths of 4 to 8 × 10⁻⁷ m. Microwaves therefore have a wavelength which is >20,000 times longer than light waves. Because of this difference it is easier in many cases to demonstrate the wave properties of electromagnetic radiation using microwaves.


1.1 Electromagnetic Waves
An electromagnetic wave is a transverse variation of electric and magnetic fields as shown in figure 1 and travels through space with the velocity of light (3 × 10⁸ m s⁻¹). Because it is a transverse wave it can be "polarized", meaning that there is a definite orientation for its oscillations. As shown in Figure 1 an electromagnetic wave is composed of electric and magnetic fields oscillating at right angles. The direction of polarization is defined to be the direction in which the electric field is vibrating. (This is an arbitrary matter; the magnetic field could equally well have been chosen to define the direction of polarization). Plane polarized radiation means that the electric field (or the magnetic field) oscillates in one direction only.

Figure 1 The electric and magnetic fields in an electromagnetic wave. E is the electric field strength, B the magnetic flux density. The wave propagates with a velocity of 3 × 10⁸ m s⁻¹.

The microwave transmitter provided emits monochromatic plane polarized radiation. A normal light source is a mixture of many different directions of polarization so that its average polarization is zero.

An electric field is defined in terms of both an amplitude and direction and is therefore a vector. It is useful to think of polarized radiation in terms of vectors. The detectors of (microwave) electromagnetic radiation used in this experiment are polarization sensitive (some are not). In this case the relative orientation of the transmitter (and electric field) and the detector (receiver) is important and is illustrated in Figure 2.

Figure 2. Plane polarised radiation incident at an angle θ with respect to the sensitive direction of the detector.

In Figure 2, if the amplitude of the electric field of the incident radiation is E0 the component that is experienced by the detector is E0cosθ. Some detectors give an output that is


proportional to the amplitude of the electric field; however, many have an output proportional to the intensity, I (or power). Intensity is proportional to the square of the electric field, so for an aligned field and detector
I = I0 = kE0² ,
whereas at an angle θ,
I = kE0²cos²θ = I0cos²θ.

From the above, the angular dependence of the signal is capable of revealing something about how the detector/receiver used operates. “Diffraction” and “interference” both relate to the superposition of waves and are essentially the same physical effect. Custom and practice dictates which term is used in a particular circumstance. The essential principles should be familiar to 1st year physics students and will not be repeated here.

2. Experimental

2.1 Apparatus: The Microwave Equipment

The transmitter incorporating a Gunn diode in a waveguide and a horn gives plane polarized* radiation and is operated at 10 V, fed by a power supply.

There are two receivers*: one is a feed horn receiver, the other is a probe. The feed horn receiver is the more sensitive and is both polarization dependent* and directional. The probe is non-directional, but is still polarization dependent and is less sensitive. The receivers are connected to a voltmeter on its mV range.

*The polarization of the transmitter and horn receiver is vertical if the writing on the back of the units is horizontal. The probe receiver placed supported by its stand on the bench is sensitive to vertically polarized radiation.

Important: Reminder: Do not look into the transmitter when it is turned on. Neither receiver should be placed nearer than 10 cm from the transmitter. Stray reflections are a big problem when undertaking microwave experiments. To minimise these, the experiment should be carried out on the top level of the bench and all objects (bags, hands and arms etc.) should be kept out of the beam whilst taking measurements.

2.2. Standing waves and the determination of wavelength To create a stationary (standing) wave a reflecting surface is placed in the path of a progressive wave to reflect the wave along its own path. The resulting waveform should be similar to that shown in Figure 3 where the distance between successive nodes (or antinodes) is half a wavelength.


Figure 3. Depiction of the standing waves set up when a wave is reflected off a surface.

The (aluminium) reflector plate should be approximately 1 metre from the microwave source.

Place the probe in the region of the standing waves and move the reflector plate either towards or away from the transmitter. (A very similar experiment can be performed by moving the detector with the reflector plate fixed.)

The probe will pass through the wave form given in Figure 3 and when the probe is connected to the meter in the receiver it will display successive maxima and minima.

Determine the wavelength of the microwaves by recording the position of several maxima and plotting a graph of position versus maximum number (the slope will give a value for half a wavelength). Does the wavelength agree with the value written on the back of the transmitter horn?

2.3 Plane polarised electromagnetic radiation

This section consists of a number of experiments to reveal the behaviour of the microwave source and receivers/detectors as well as some of the properties of plane polarized radiation.

Plane polarization and receiver sensitivities
Position the transmitter and horn receiver 0.5 m apart with both oriented for vertically polarized radiation. Align the transmitter and detector by maximising the signal and make a note of the signal.

The polarization of the emitted radiation and polarized sensitivity of the receiver can be demonstrated by rotating the transmitter through 90o. Find the minimum possible signal and record it.

Repeat for the probe receiver and compare the properties of the two receivers.
Return the transmitter and horn to their vertical position. Place the large metal grid between the two, rotate it and observe the variation in the received signal. What effect does the grid have? Why?

Detection of polarized radiation: angular dependence
Either by using the metal grid or by rotation of the transmitter, deduce the dependence of the measured power on the angle of polarization. (This may be quite tricky.)
Find a suitable way of measuring the angle of rotation and vary this in 15 degree steps from 0° to 180°. Record the measured signal.
Tabulate the signal measurements along with the expected values for cosθ and cos²θ dependencies. What do the results imply?
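If you wish, the comparison columns of that table can be generated with a short Python sketch; only the angles and the two model dependencies are computed, and your measured signals are added by hand.

import numpy as np

theta = np.arange(0, 181, 15)                       # degrees, as in the 15-degree steps above
cos_t = np.cos(np.radians(theta))
print(f"{'theta':>6} {'|cos|':>8} {'cos^2':>8}")
for th, c in zip(theta, cos_t):
    print(f"{th:6d} {abs(c):8.3f} {c**2:8.3f}")     # magnitude of cos theta, and cos^2 theta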


2.4 Demonstration of interference effects

This part of the experiment builds up a microwave analogue of the single slit optical interference experiments. By concentrating on the straight through beam the experiment complements optical diffraction experiments. The general arrangement is shown in Figure 4.

Figure 4. Schematic of the experimental arrangement for interference from a single slit (the transmitter is shown relatively much closer to the slit than is required)

The experiment is performed in four parts whilst keeping the distance between the front of the transmitter and the plane AA’ constant (at ~0.6 m). This will allow all results to be compared.

(i) No slits in place This section gives an indication of the spread of microwaves emitted from the source.

Position a 1 m rule on the bench top to provide an indication of position in the AA’ plane.

Moving the probe in 2 cm steps between measurements, take 8 measurements either side of the centre line, i.e. 17 measurements in all.

Plot the data. Note: The graph shows the distribution of microwave power in the “beam” emitted from the transmitter.

(ii) Single slit: variable slit width probe fixed in straight through position This section investigates the effect of slit width on the straight through beam.

Position the two large plates equidistant from the front of the transmitter and the plane AA’, with a slit width of 3 cm.

Keeping the centre of the slit on the line between transmitter and probe, take measurements as the separation of the plates (width of the slit) is increased in 2 cm steps up to ~21 cm and then in 1 cm steps up to ~35 cm.

Plot the data and compare with (i).

Note: The above results have all the hallmarks of interference.

(iii) Single plate: variable plate position, probe fixed in straight through position


This section seeks to provide an explanation for the results found in (ii).

Position one large plate as above but with one of its edges directly in the line of sight between the source and the detector. Make a note of this position and then move it across a further 5 cm to obscure the detector.

From this starting position take readings as the plate is moved out of the beam. Take readings every ~2 cm for the first 10 cm and every 1 cm for the final 10 cm (20 cm movement in total). (You can always add more readings if you need to.)

Plot the data and consider whether two such single plates can explain the results in (ii).

Note: There is very little scattering of radiation behind the plate.

The origin of interference
If all has gone well, the two-plate/single-slit interference behaviour of the straight through beam can now be understood to arise from the addition of the effects of two single plates. The single plate behaviour is better considered to be an example of "straight edge diffraction", where the straight through beam from the emitter interferes with a secondary source of radiation reflected from the edge of the plate. As the plate is moved away from the centre line the path difference, between the straight through and reflected beams, increases. From this argument it might be expected that the first turning point, corresponding to a path difference of λ/2 (phase difference of π), would be a minimum, whereas clearly it is a maximum. This is explained by the reflection at the edge producing a (negative) phase shift in the re-emitted radiation.

If you have time, use Pythagoras theorem to determine the phase shift** caused by reflection at the edge. See Appendix at end.

** A simple reflection (as in 2.2) would be expected to result in a -π phase shift, however with this geometry the Gouy effect is reported to result in a further -π/4 phase shift giving a total of -3π/4.

(iv) Single slit diffraction pattern: fixed width This section seeks to illustrate the fundamental equivalence of light and microwaves by generating a (familiar) single slit diffraction pattern.

Position the two large plates as in (ii) but with a separation of 11 cm.
Moving the probe in 2 cm steps between measurements, take 8 measurements either side of the centre line, i.e. 17 measurements in all.
Plot the data and compare the first minimum with its expected position (given λ = 2.8 cm).
(Note: here, due to diffraction, minima are expected at nλ = d sin θ, where d is the slit width.)
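As a rough, illustrative estimate of where that first minimum should fall (assuming the slit sits roughly midway between the transmitter and the plane AA', i.e. about 0.3 m from AA'; adjust for your actual geometry):

$$\sin\theta_1 = \frac{\lambda}{d} = \frac{2.8}{11} \approx 0.25, \qquad \theta_1 \approx 15^{\circ}, \qquad x_1 \approx 0.3\,\mathrm{m}\times\tan 15^{\circ} \approx 8\ \mathrm{cm}.$$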


Appendix The experimental arrangement is shown in figure 5 where the source is considered to be a point - a parallel beam would be more appropriate for a visible laser/edge arrangement. The distance from plane of sheet to the source and detector is the same.

Figure 5. Schematic of experimental arrangement for edge interference. The paths for microwaves travelling directly between source and detector and via the edge are shown.

The geometric path difference (found using Pythagoras) is 2δ, where
δ = (d² + L²)^(1/2) − L .
Extrema (i.e. maxima and minima) in intensity occur, taking into account the Gouy effect, when:

(m + 1)λ/2 = 2((d² + L²)^(1/2) − L) + 3λ/8 ,
where m is a positive integer. Note half-wavelength path lengths give alternating max and min and so the "extrema".


Experiment 19: Group Easter Challenge A blank page because, as of September 2015, we hadn’t made it up yet…. But it will be fun and involve prizes.


Experiment 20: Air resistance

Note: You must keep a real-time lab diary in the usual way and aim to finish all analysis within the 4 hours. Your lab book will be taken in at the end of the 4 hour session.

Equipment: 3 muffin cases, 1 m rule, stopwatch.

Safety: Students must not raise themselves (unreasonably) off the floor to gain extra height and must perform the experiment in the first year laboratory.

Outline
With only a reminder of the important physics, you are asked to determine as much as you can about a very simple system: muffin cases falling vertically through the air. Some students may have come across this experiment before; however, it is demanding in terms of both experimental skill and analysis - do not underestimate it.

Experimental skills
- Making and recording basic measurements: heights and times (and their errors).
- Making use of trial/survey experiments.
- Careful experimental observation.

Wider Applications
- Planes, trains and automobiles are all designed to reduce air resistance in order to go faster and/or travel more efficiently.
- The wider scientific field is that of fluid dynamics (the movement of fluids), a highly complex field that includes the prediction of weather patterns and the processes of star formation.

1. Introduction
The force due to air resistance (drag) acting on a body travelling through air is proportional to ρAv², where ρ is the air density, A is the cross-sectional area of the body and v is the velocity through the air. The constant of proportionality is called (or at least is very closely related to) the "drag coefficient". A special case is a body falling under the influence of gravity, so that the downwards force acting upon it is constant (mg). Starting from rest, and given sufficient time, the drag grows until it balances the downwards force and the body falls at its so-called "terminal velocity".
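As an order-of-magnitude sketch only (not part of the experimental instructions), the terminal velocity implied by the balance mg = CρAv² can be estimated in Python. The case radius and the drag coefficient C below are illustrative assumptions; the mass and air density are the values quoted in the notes.

    import numpy as np

    g = 9.81          # m s^-2
    rho = 1.2         # air density, kg m^-3 (value given in the notes)
    m = 0.042 / 75    # mass of one muffin case in kg (75 cases have a mass of 42 g)
    r = 0.035         # assumed case radius in m - measure your own
    A = np.pi * r**2  # cross-sectional area, m^2
    C = 1.0           # assumed drag coefficient

    v_t = np.sqrt(m * g / (C * rho * A))   # from m*g = C*rho*A*v_t**2
    print(f"estimated terminal velocity ~ {v_t:.1f} m/s")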

2. Experimental
By a combination of experiment(s) and analysis, discover as much as you can about the air resistance of the system in the four hour laboratory session.

Notes:
- By dropping multiple cases together the mass can be increased without changing the cross-sectional area.
- Take the density of air (ρ) to have a value of exactly 1.2 kg m⁻³.
- 75 muffin cases have a mass of 42 g (with an error of +/- 1 g).
- Compared to normal teaching lab diaries, your notes will need to contain more procedural information (since no instructions are available to refer to).
- Demonstrators are available to bounce ideas off - not for telling you how to go about your investigation.


Experiment 21: Computer Error Simulations and Analysis

Outline
The autumn semester introduced random errors (from repeated measurement and from straight line graphs) and the propagation of errors (through techniques of partial differentiation and adding in "quadrature"). Having used these concepts for a while, this session revisits the underlying concepts using new and existing Python computing skills.

Experimental (and computing) skills
- Understanding the statistical analysis of data.
- Use of statistical computing tools.

Wider Applications
- This experiment illustrates the unseen statistics behind all practical physics.
- In advanced applications the statistical analysis of data is all handled by computers.
- This section explores the nature of least squares fitting and provides an introduction to alternative numerical approaches.

1. Introduction
The experiment "Statistics of experimental data (Gaussian Distribution)" performed during the autumn semester (PX1123) introduced you to some of the underlying foundations of the analysis of random errors. Here the subject is revisited; but, by making use of a computer (and Python programming) to both generate and analyse data, much faster progress can be made. After reconsidering the error associated with repeated measurements of a single point, the session moves on to consider the treatment of error propagation (the combination of errors) and the "least squares" analysis of straight line data.

Session:
1. Evolution of errors with repeated measurement with a normal distribution.
2. Error propagation (making sense of adding in quadrature).
3. The statistics of straight line graphs.

Quick Reminder: the nature of experimental measurements (see section III.2 of the PX1123 lab manual for full treatments)
- Repeated measurements usually result in a normal distribution around a mean value.
- With a reasonably large number of repeats, "standard errors" represent the uncertainty in determined values.
- For y(x), when x is varied the data points can be considered as very similar to repeats, with the points distributed above and below the "best fit line".

2. Experiments
It will be a good idea to have access to the website during the course of the session. This should be one of your "favourites" but if it is not:

https://alexandria.astro.cf.ac.uk/Joomla-python/

Quick Python reminder - relevant syntax is present in weeks 2 and 3 (Arrays, Vector Algebra and Graph Plotting) of the taught computing course.

2.1. Normal/Gaussian statistics of repeated measurements
Section 2.1 will be based on the simulation of repeated measurements of two timed events, A and B, both measured with a stopwatch.

111

Suppose that:
- For the sake of the simulations, the true values of A and B are 2.0 s and 3.0 s exactly.
- The standard deviation* that characterises both measurements is 0.2 s.

*The standard deviation parameterises the spread in values that are obtained and so is also said to characterise (parameterise) the precision of the measurement.

2.1.1 Distributions for A and B
The first step is to create arrays of points for A and B randomly generated from ideal normal distributions. The first point in each array then corresponds to the first measurement, etc. Provided these arrays are only created once, the subsequent analysis can be cross-compared. To achieve this, the arrays for A and B will be created in the Spyder console. This does not exclude creating programmes in the editor, because they can be (and normally are) executed in the console and so can call on arrays that exist there.

Creating arrays
This will be done using the normal() function. As given in the object explorer, the defaults for this are:

    normal(loc=0.0, scale=1.0, size=None)

where loc is the mean value of the distribution, scale is the standard deviation and size is the number of points.

Do the following (a minimal sketch of these steps is given after the list):
- Create n = 1000 point arrays for A and B (labelled as A and B).
- Create and print out a single (20 bin is appropriate) histogram including both A and B, and comment on the range of values for each and any overlap between the distributions.
- Perform a statistical analysis of A to find the mean, standard deviation and standard error.
- Transfer these to the editor and save the code as a (very) simple programme - it is worth it as it will be used a few times today. Since this runs in the console it can call on the A array generated earlier. Do not write a function to generate A in the programme as this will overwrite it.
- Change the array name in the programme to analyse the B array.
- Consider the appropriate parameter to use as the errors in A and B, state their values (with errors - as usual) and state whether they agree with the accepted/known values of A and B.
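A minimal sketch of these steps (np.random.normal is the numpy routine behind the normal() function described above; in the lab the arrays should be created once in the console so that later programmes do not overwrite them):

    import numpy as np
    import matplotlib.pyplot as plt

    n = 1000
    A = np.random.normal(loc=2.0, scale=0.2, size=n)   # simulated timings of event A
    B = np.random.normal(loc=3.0, scale=0.2, size=n)   # simulated timings of event B

    # one histogram containing both distributions
    plt.hist(A, bins=20, alpha=0.5, label="A")
    plt.hist(B, bins=20, alpha=0.5, label="B")
    plt.xlabel("time (s)")
    plt.ylabel("counts")
    plt.legend()
    plt.show()

    # statistical analysis of A
    mean_A = np.mean(A)
    std_A = np.std(A, ddof=1)            # sample standard deviation
    sterr_A = std_A / np.sqrt(len(A))    # standard error on the mean
    print(mean_A, std_A, sterr_A)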

2.1.2 Error propagation (adding in quadrature)
Students have been required to combine errors based on the outcomes of partial differentiation (which hopefully makes sense) and addition in quadrature (which hasn't yet been justified). The aim here is to justify the addition in quadrature. The addition and multiplication of two values (A and B) will be considered and their errors will be taken to be their standard deviations. (A large number of points (n) will be used, so standard errors are more appropriate; however, since the two are linked by a factor of (n − 1)^(1/2), this will not affect the interpretation or the error propagation.)


Addition of A and B (sum, S = A + B)
Reminder: error propagation for P = A + B.

Partial differentiation gives ∂P/∂A = 1 and ∂P/∂B = 1, so the separate contributions to ΔP are ΔA and ΔB. Combining these contributions in quadrature gives the familiar

    (ΔP)² = (ΔA)² + (ΔB)²

Here a distribution of n (= 1000) measurements of S = A + B will be generated, i.e. the first value of S is the first measurement of A added to the first of B, and generally for the ith term S_i = A_i + B_i. In this way some errors/deviations from the true value will reinforce (positively or negatively) and some will tend to cancel. This is as would be expected in a real experiment.

- Add the arrays A and B together to create the S array.
- Plot a histogram and perform a statistical analysis of S to find its mean and standard deviation.
- Compare the mean of S with the expected value, and its standard deviation with the error in S calculated (in the usual way) using the standard deviations in A and B as their errors. (A sketch of this check follows the list.)
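A sketch of the check, reusing the A and B arrays generated earlier:

    import numpy as np

    S = A + B                        # element-wise, S_i = A_i + B_i
    print(np.mean(S))                # expect a value close to 5.0 s
    print(np.std(S, ddof=1))         # measured spread of S
    print(np.sqrt(0.2**2 + 0.2**2))  # quadrature prediction, ~0.28 s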

Multiplication of A and B (product, P = AB)
Reminder: error propagation for P = AB.

Partial differentiation gives ∂P/∂A = B and ∂P/∂B = A, so the separate contributions to ΔP are BΔA and AΔB. Combining these contributions in quadrature,

    (ΔP)² = (BΔA)² + (AΔB)²

Dividing by P² = (AB)² gives the familiar

    (ΔP/P)² = (ΔA/A)² + (ΔB/B)²

Here a distribution of n (= 1000) measurements of P = AB will be generated, i.e. the first value of P is the first measurement of A multiplied by the first of B, and generally for the ith term P_i = A_i·B_i. Again, some errors/deviations from the true value will reinforce (positively or negatively) and some will tend to cancel. Use the same arrays for A and B as before.

- Multiply the A and B arrays together to produce P.
- Plot a histogram and perform a statistical analysis of P to find its mean and standard deviation.
- Compare the mean of P with the expected value, and its standard deviation with the error in P calculated in the usual way. (A sketch of this check follows the list.)
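A sketch of the corresponding check for the product (again reusing A and B):

    import numpy as np

    P = A * B                                         # element-wise, P_i = A_i * B_i
    print(np.mean(P))                                 # expect a value close to 6.0 s^2
    frac = np.sqrt((0.2 / 2.0)**2 + (0.2 / 3.0)**2)   # from (dP/P)^2 = (dA/A)^2 + (dB/B)^2
    print(np.std(P, ddof=1), frac * 6.0)              # measured spread vs prediction (~0.72 s^2)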

2.1.3 Evolution of the mean, standard deviation and standard error
The aim here is to illustrate the difference between the standard deviation and the standard error, and their suitability in representing the random error in measurements.

The A array of 1000 points generated at the start of this section will again be used and should not be overwritten. The approach will mimic an experiment in which the number of measurements is gradually increased and the mean, standard deviation and standard error evolve.

The Python programme written earlier needs to be modified to perform the analysis in this section. To do this elegantly requires the use of "For loops", which are scheduled for week 7 (but subject to change). Depending on proficiency (and perhaps confidence), students may stick to a simpler sampling strategy (a) or use loops (b).

For both strategies it will be necessary to sample (or return) parts of the array A, a sequence that always starts with the first value. This skill was addressed in week 3 of the computing course. Start by testing that you can sample the array correctly.

(a) Simple sampling strategy
- Transfer the code to sample the array to your existing programme and test that it performs correctly (e.g. by examining the mean of a small number of points).
- Next run the programme to analyse the first 5, 10, 20, 50, 100, 200, 500, 1000 points.
- Plot a graph of (mean value − 2), +/- standard deviation and +/- standard error on the y-axis and number of samples (measurements) on the x-axis. (+/- are plotted here to represent possible error ranges.)
- Consider and describe the evolution with number of measurements. (A sketch of this strategy follows the list.)
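A sketch of the simple strategy (A as generated at the start of the section):

    import numpy as np

    sizes = [5, 10, 20, 50, 100, 200, 500, 1000]
    for n_s in sizes:
        sample = A[:n_s]                   # the first n_s "measurements"
        mean = np.mean(sample)
        std = np.std(sample, ddof=1)
        sterr = std / np.sqrt(n_s)
        print(n_s, mean - 2.0, std, sterr)
    # (mean - 2.0), +/- std and +/- sterr can then be plotted against n_s as described above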

(b) Advanced strategy (using For loops)
- By using a For loop it is possible to sample and analyse each measurement from 2 to 1000 points and see the evolution in much finer detail.
- However, do not attempt this approach unless you are proficient in the use of loops.
- Consider and describe the evolution with number of measurements. (A sketch is given below.)
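A sketch of the loop-based strategy (again using the A array generated earlier):

    import numpy as np
    import matplotlib.pyplot as plt

    ns, means, stds, sterrs = [], [], [], []
    for n_s in range(2, len(A) + 1):
        sample = A[:n_s]
        ns.append(n_s)
        means.append(np.mean(sample))
        stds.append(np.std(sample, ddof=1))
        sterrs.append(stds[-1] / np.sqrt(n_s))

    plt.plot(ns, np.array(means) - 2.0, label="mean - true value")
    plt.plot(ns, stds, label="standard deviation")
    plt.plot(ns, sterrs, label="standard error")
    plt.xscale("log")
    plt.xlabel("number of measurements")
    plt.legend()
    plt.show()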

2.2 Straight line graphs
Laboratory and computing courses have introduced the analytical method of finding the "least squares" best fit (and associated errors) to straight line (linear) data. Although this has been used, it has not yet been examined in detail. To do this, the "Hooke's law data" given in Table 1, used in the computing module, will be used as an example data set.

Mass (x_data) / kg    Length (y_data) / m
0                     0.055
0.1                   0.074
0.2                   0.089
0.4                   0.124
0.5                   0.135
0.6                   0.181
0.8                   0.193

Table 1: Hooke's Law data taken from the computing course

Least squares analysis leads to gradient = 0.18 +/- 0.01 m/kg and y intercept = 0.055 +/- 0.006 m, so that the best estimate of the straight line representing the data is y = 0.18x + 0.055.
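As a cross-check (a sketch only, not part of the set procedure), the quoted fit can be reproduced with scipy.stats.linregress; note that intercept_stderr is only available in recent SciPy versions.

    import numpy as np
    from scipy import stats

    x_data = np.array([0, 0.1, 0.2, 0.4, 0.5, 0.6, 0.8])
    y_data = np.array([0.055, 0.074, 0.089, 0.124, 0.135, 0.181, 0.193])

    fit = stats.linregress(x_data, y_data)
    print(fit.slope, fit.stderr)                 # gradient ~ 0.18 +/- 0.01 m/kg
    print(fit.intercept, fit.intercept_stderr)   # intercept ~ 0.055 +/- 0.006 m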


Reminder of the "least squares" approach
- The errors in the x points are insignificant - this means that the deviation of a point from the fit line can be taken to be solely associated with the y values. Consequently the statistics describing this situation are essentially the same as those describing repeated measurements of a single point.
- The (random) errors characterising the y data points are all the same (and can be described by a standard deviation) - this means that all points have equal importance or "weight".
- The best fit line must pass through the mean of the x and y data values (x_mean and y_mean respectively).
- Since the errors in the x points are insignificant, the difference between the best fit line and the data points is characterised by the difference between the corresponding y values, known as "residuals". The values of m and c for which the sum of the squares of the residuals is minimised give the best fit line.

Note: the least squares method of obtaining best fits is not limited to straight line data although it is then more difficult or impossible to find analytical expressions and it is often necessary to resort to numerical techniques (through use of a computer).

The approach for investigating least squares fitting of straight line graphs
A set of straight lines, all passing through the mean of the x and y data values but having different gradients (including the best fit gradient), will be generated. The sum of the squares of the residuals will be calculated for each line and plotted against gradient.

Guided by the known best fit, we'll consider the quality of fits for gradients of m = 0.18 +/- 0.05 m/kg, i.e. in the range 0.13 to 0.23 m/kg in 0.01 m/kg steps.

Do the following in the Spyder console (a sketch of the whole procedure is given after the list):
- Generate arrays of x and y data points; call these x_data and y_data.
- Find the mean of the measured x and y points.
- For m = 0.18 m/kg (we'll start with the best fit gradient) calculate an array of points for the corresponding straight line based on the x_data points.
- Generate an array of the differences between the y data points and the y best-fit-line points. These values are the residuals.
- Square the residuals, find their sum and record this in a table in your diary.
- Transfer the working code to the editor to create and save a little programme.
- Repeat* the calculation for all the required gradients.
- Plot a graph of the sums of the squares of residuals versus gradient.
- Describe its form.

* This could also be done using a loop.
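A sketch of the whole procedure (gradient range and data as above):

    import numpy as np
    import matplotlib.pyplot as plt

    x_data = np.array([0, 0.1, 0.2, 0.4, 0.5, 0.6, 0.8])
    y_data = np.array([0.055, 0.074, 0.089, 0.124, 0.135, 0.181, 0.193])
    x_mean, y_mean = np.mean(x_data), np.mean(y_data)

    gradients = np.arange(0.13, 0.235, 0.01)      # 0.13 to 0.23 m/kg in 0.01 steps
    sums = []
    for m in gradients:
        y_line = y_mean + m * (x_data - x_mean)   # a line through (x_mean, y_mean)
        residuals = y_data - y_line
        sums.append(np.sum(residuals**2))

    plt.plot(gradients, sums, "o-")
    plt.xlabel("gradient (m/kg)")
    plt.ylabel("sum of squared residuals")
    plt.show()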


III: BACKGROUND NOTES

III.1: Experimental Notes

INTRODUCTION TO ELECTRONICS EXPERIMENTS

In these experiments you will be required to build a variety of analogue electrical circuits and to make measurements of potential differences, current flows etc. The following notes give advice on building circuits and how to use test equipment, such as oscilloscopes, multimeters and signal generators. The final section gives advice on eliminating faults in electrical circuits.

1. Building Circuits

1.1. Breadboards/Prototype boards (or "solderless breadboards" or "plugboards")

Introduction
- Breadboards are widely used for making temporary prototypes of electrical or electronic circuits and for experimenting with circuit design.
- Individual components, whether resistors, capacitors or integrated circuits, can be plugged directly onto the board and appropriately connected using "jump leads".
- The facility to easily handle components, build simple (through to potentially very complex) circuits and to re-use everything is why we use them in the teaching laboratories.

Construction and use

Figure 1. Close up of a breadboard as used in the year one laboratory. The resistor and capacitor shown are connected in series, and jump leads link to binding posts to which coaxial or other leads can be connected.

Overview
- A breadboard consists of a perforated plastic block, each perforation having a metal-alloy spring clip behind it: the clips are known as tie points or contact points.


- Sets of clips are electrically joined in rows or columns, and the board essentially consists of numerous unconnected sets that can be used (or not) as required.
- The sets of contact points come in two separate types, "bus" and "terminal" (see below), and an area of the same type is called a "strip".

Terminal strip (where most of the devices are connected)
- This area has connected rows, all 5 clips in length, and a notch parallel to its long edge.
- The notch marks the centreline of the strip and is designed for integrated circuits to straddle it and be allowed limited air flow for cooling.
- Note: the spacing between clips and across the notch is specifically designed to accept integrated circuits.

Bus strip (used to power devices)
- This area usually has two columns.
- The entire length of a column may be connected or there may be a break halfway along.
- Breaks can be a useful feature, but if not used it is often a good idea to link across the gap at the start of the build.
- Bus strips typically run down the sides of terminal strips.

Larger boards (such as ours)
- May be mounted on a sheet of metal.
- Often consist of a number of strips clipped together.
- Typically include a number of binding posts that provide a clean way to connect an external power supply.

Using breadboards (in the UG laboratories)
- The clips are designed to hold the legs (terminals) of electrical/electronic devices (e.g. resistors or integrated circuits) or wires.
- The legs of devices should not be shortened.
- The (jump) wires may be bought specially terminated, or may be cut from a reel and stripped of a suitable length of insulation: stripped wires should be 22 AWG (0.33 mm²) and not multi-strand wires.
- When connecting jump wires to binding posts take care not to pinch down on insulation: this is a common cause of poor connections. It is easily avoided by ensuring that some bare jump wire visibly protrudes from the post.
- When connecting coaxial leads to the binding posts ensure that the earth always goes to the same post.
- Spring clips are rated for devices operating at 1 A/5 V or 0.33 A/15 V (i.e. 5 W dissipated in the device).
- Note: each clip takes one leg or wire only.


Care needs to be taken when testing circuits with a voltmeter or oscilloscope: it is safest to insert a suitably terminated wire into a spring clip. When a (necessarily sharp) probe is used there is a danger that the spring clips may be damaged.

Limitations of breadboards
- The relatively high, and not very reproducible, contact resistances (the resistance between the spring clips and the wire pushed into them) can be a problem for d.c. or low frequency circuits.
- The current and voltage ratings provide limitations.
- Stray capacitance and inductance, combined with contact resistances, limit high frequency operation to ~10 MHz.

2. The Oscilloscope
An oscilloscope is a common and important piece of test equipment that allows time-varying voltages to be visualised on a 2D plot. Usually one (or two) signal voltages, on the vertical axis, are plotted as a function of time, on the horizontal axis.

The basic functions of the oscilloscope are shown in Figure 2. Most of the functions are self explanatory.
- Function Keys: access the function alongside the button shown on the LCD display.
- Variable Knob: increases or decreases a value and moves to the next or previous parameter.
- CH1/CH2/Math: configures the vertical scale and coupling for each channel input (CH1 and CH2), and also Math operations such as 'add', 'subtract', or performing 'Fast Fourier Transforms (FFT)' on input waveforms.
- Volts/Div: sets the y-axis scale.
- Time/Div Knob: sets the timebase (x-axis scale).
- Autoset Key: automatically configures the horizontal, vertical, and trigger settings according to the input signal.
- Trigger Level Knob: sets the trigger level. This controls the scope's ability to reproduce a steady trace on the screen.

GW Instek GDS-1022 oscilloscope

Introduction
The GW Instek is a digital oscilloscope with the advantage, over older analogue scopes, that the instrument can perform mathematical manipulations, i.e. it will do some data analysis for you.


Figure 2. Front panel of the GW Instek oscilloscope. Particularly important features initially are shown with circles

This is meant to be a reference guide. For a step by step guide see the first year experiment “Intro to oscilloscopes and multi-meters”.

2 Finding your way around the LCD display
Traces (waveforms) are visible on a grid that is split into small (10x8) squares. Information is colour coded: yellow and blue correspond to channels 1 and 2 respectively. There is a lot of information around the periphery of the display that changes depending on use.

To the left of the trace:

A number (the channel number) and an arrow (►) showing the 0 V position.

The position of 0 V (and so the whole trace) can be altered up/down by rotating the channel 1 "vertical" knob. When this is done the position of 0 V on the screen (versus the central horizontal axis) is displayed in the bottom right hand corner of the display.

To the right:

The broad blue column contains a changeable “menu” of choices and measurements.

On the trace is an arrow (◄) indicating the “triggering voltage”.

Underneath the trace:

On the LHS the scales for the two channels are given. These numbers give the voltage corresponding to the vertical side of a (~1 cm) square (i.e. volts per division).

Note that the active channel is denoted by a number on a coloured background, whereas the inactive channel is a number on a dark background.

Next along is the “horizontal status”: “M” is for Main mode (of the scope) and the time corresponds to a ~1cm square (or time per division).

On the right, both with a green background, are a “T” (for Trigger) followed by the triggering conditions: in this case an edge (i.e. a changing signal) on the channel 1 waveform, in fact here a rising edge (i.e. the signal must be increasing with time). Below this is an “f” (for frequency) followed by a value.

Above the trace:

The “▼” symbol above the centre of the screen and the symbols above it relate to the horizontal position of the waveform. Altered by rotating the “horizontal” knob.

To the right, in green, "Trig'd ●" indicates that a signal is being triggered (on taking the trigger voltage out of the range of the signal this changes to "auto ●/o", meaning that the screen is updated regardless of the trigger conditions). Alter with the "Trigger level" knob.

3 About triggering

Triggering is central to the operation of oscilloscopes but has many variants and so will be only introduced here. See section 7 for more information.

In operation the oscilloscope displays one trace and then almost immediately replaces it with another. The reason that the display appears stable on the screen is that each trace is made to start in the same place, i.e. it is “triggered” under the same conditions. In fact, with this digital scope the trigger point is in the centre of the x axis on the screen.

If the trigger level (voltage) goes out of the range of the oscillating signal the system cannot trigger and the screen simply updates randomly leading to an unstable trace.

4 Setting up single or two channel y-t traces (using "Default Setup" and "Autoset")
A default setup is a useful configuration to start from or return to. To get to it:

Turn on the oscilloscope and when the GW Instek banner has disappeared press “Save/Recall” and then select “Default Setup” (Channel 1 and 2 are then positioned at the centre of the top and bottom halves of the screen respectively).

If only channel 1 is required cancel channel 2 by pressing its button twice (once to select, 2nd time to turn it off). (Note: channel 1 (2) button is yellow (blue) and this colour code is also used on the LCD display.)

Pressing “Autoset”: with a reasonable signal level (>30 mV) and frequency (>20 Hz) the scope will choose suitable signal (y axis) and time (x axis) ranges and triggering conditions.

5 Making simple measurements with the oscilloscope
Oscilloscopes aren't precise measurement instruments (but they are good enough in years 1 and 2).

Using “Cursor” (to do some of the work)

Press "Cursor" and two vertical lines appear on the screen that are used to give 2 horizontal positions (X1 and X2).

Alternatively, pressing the X↔Y function key accesses 2 vertical positions (Y1 and Y2).

The position of the cursors is controlled by the “Variable” knob (at top left).

With the function key X1X2 (Y1Y2) selected the separation of the cursors is fixed.

To control them individually select X1 (Y1) or X2 (Y2).

Note: if the cursors are positioned a time t apart, the scope calculates a frequency (f = 1/t), but this only makes sense (as a rough measure) if t corresponds to one period, T.

Using “Measure” (to do all of the work)

Press the "Measure" menu key and the peak-to-peak voltage (Vpp), the average voltage (Vavg) and the frequency (f) are displayed directly in the blue column (along with other quantities that will be ignored here).

6 Setting up an X-Y display
Instead of plotting against time on the x-axis, here the channel 1 input controls the X-axis, whilst channel 2 controls the Y-axis. To set this up:

With both channels active press the Horizontal "Menu" key.

Press XY.


7. Additional Notes on the Timebase trigger
For the analysis of time varying voltages the trace on the oscilloscope screen must be stationary. If the timebase were "free-running", that is, not synchronised to some multiple of the repeat-time or period of the input waveform, then the trace on the screen would not be stable. To synchronise the timebase to the repeat time or period of the input waveform a "trigger" is used. The trigger circuit in the oscilloscope effectively 'fires' or emits a pulse when the input voltage passes a set threshold level. This pulse is then used to initiate the timebase cycle. The input to the trigger circuitry is normally taken from the y axis input amplifier. Sometimes it is found necessary to apply an alternative, externally-derived voltage direct to the trigger circuit via the external trigger input. The trigger is sensitive to both slope and polarity of the input waveform and can be set to fire on a particular slope and on positive or negative polarity. Hence, if a periodic waveform such as a sinusoid is applied to the input terminals, the trigger can be set to fire once every cycle at a fixed point in the cycle (Figure 3). The timebase cycle shown would lead to a stationary trace representing one cycle of the input waveform. The trigger level is shown on the display on the RHS of the axis (small arrow marker). This is the trigger threshold voltage shown in figure 3.

Figure 3: Understanding the timebase


Notes on the AC and DC components of the oscilloscope waveform

A general time-varying voltage such as that shown in Figure 4(a) may be divided into two components: (i) a D.C. component, equal in magnitude to the mean value (i.e. the average over all time) of the waveform (Figure 4(b)), and (ii) an A.C. component which remains when the D.C. component has been removed from the waveform (Figure 4(c)). The oscilloscope amplifiers may be D.C. or A.C. coupled. Try this on the waveform you are observing. When the coupling is set to D.C. the trace represents both the D.C. and A.C. components as shown in Figure 4(a). Setting the coupling to A.C. removes the D.C. component, just leaving the A.C. component as in Figure 4(c).

Figure 4: (a) a general time-varying voltage; (b) its D.C. component; (c) its A.C. component.

3. The Multimeter
The multimeter you will encounter in your first year experiments (and many subsequent ones) is a hand-held digital device, shown in figure 5. It is capable of measuring direct and alternating voltages and currents and resistance, and of diode testing. You must select the mode of operation on a central switch, apply your terminals correctly and select the appropriate measuring range.

Figure 5: The Multimeter

4. The Signal Generator
The output from the oscillator is available from the bottom right BNC socket. The signal amplitude can be varied by means of the attenuator (0 dB or -20 dB) and the variable output level. Three different waveforms are available: sine, triangular and square. The OFFSET knob works only when the DC OFFSET button is depressed.

5. Resistance Colour Codes
Resistors are colour-coded to indicate their resistance, tolerance and power-handling capacity. The background colour indicates the maximum power of the device. You will use only 0.5 W resistors (dark red background). The four coloured bands can be read as described below to determine the resistance and tolerance. The final gold or silver band gives the tolerance as follows: gold ± 5%, silver ± 10%.



Digit   Colour    Multiplier   No. of zeros
 -      silver    0.01         -2
 -      gold      0.1          -1
 0      black     1             0
 1      brown     10            1
 2      red       100           2
 3      orange    1 k           3
 4      yellow    10 k          4
 5      green     100 k         5
 6      blue      1 M           6
 7      violet    10 M          7
 8      grey      -             -
 9      white     -             -

Table 1.1: Resistor colour-codes
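For those who prefer to see the decoding written out, a small Python sketch of reading Table 1.1 is given below (purely illustrative - it is not part of any laboratory software); it reproduces the worked example that follows.

    DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
              "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
    MULTIPLIERS = {"silver": 0.01, "gold": 0.1, "black": 1, "brown": 10, "red": 100,
                   "orange": 1e3, "yellow": 1e4, "green": 1e5, "blue": 1e6, "violet": 1e7}
    TOLERANCES = {"gold": 5, "silver": 10}

    def resistance(band1, band2, band3, band4):
        """Return (resistance in ohms, tolerance in %) for a four-band resistor."""
        value = (10 * DIGITS[band1] + DIGITS[band2]) * MULTIPLIERS[band3]
        return value, TOLERANCES[band4]

    print(resistance("red", "yellow", "orange", "gold"))   # (24000.0, 5)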

Example: red-yellow-orange-gold is a 24 kΩ, 5% resistor.

6. Finding Faults in Electronic Circuits
During the course of the laboratory work you will probably encounter practical difficulties. You should always try to solve these problems yourself, but if you are unable to then you should call on the assistance of the demonstrator. Occasionally, a circuit will fail to operate because of a faulty component, but more often than not problems arise from the incorrect use of test equipment, the omission of power supplies from circuits, or the use of broken test leads. Faults are not usually apparent to the naked eye, but they may be detected quite easily by following a systematic checking procedure such as that outlined below. If after following these procedures your circuit still doesn't work, then DO NOT HESITATE TO ASK THE DEMONSTRATOR FOR HELP.

(i) Ensure that you understand how to use each piece of test equipment. If in doubt, consult the demonstrator.

(ii) Examine the circuit for any obvious faults. Is the circuit identical to the circuit diagram in the script? Are the components the correct values? Are there any loose wires or connectors which could short out part of the circuit?

(iii) The fault may lie in the circuit itself, in the signal generator which supplies the input signal, or in the measuring equipment. Switch on the power supply to the circuit and apply the input signal. Use both channels of a double-beam scope to measure simultaneously the input and output signals of the circuit. Check at this stage to see whether the scope leads are faulty. Ensuring that you do not earth any signals (see next section), connect the scope to the input and output of the test circuit. If there is no input signal, disconnect the signal generator and test it on its own. If the generator functions only when disconnected from the circuit, it implies that the fault lies in the circuit and that it is possibly some type of short circuit, most likely associated with incorrect earthing. If there is an input signal but no output signal, the fault lies in the circuit.

(iv) A common fault which occurs when using more than one piece of mains-powered equipment is the incorrect connection of earth lines. ALL EARTHS MUST BE CONNECTED TO A COMMON POINT, otherwise the signal may be shorted out.

(v) If you have established that the fault lies in the circuitry, use your scope to examine the passage of the signal through the circuit. Components which you regard as faulty should be isolated or removed from the circuit for further testing.

(vi) If you trace a fault to a piece of mains-powered equipment, DO NOT ATTEMPT TO REPAIR THE FAULT YOURSELF. Report the fault to the demonstrator or technician and ask for replacement equipment.

HOW TO USE A VERNIER SCALE

Vernier scales are used on many measuring instruments, including the travelling microscope that we will use in the laboratory. We will begin by looking at the general principle of a vernier scale and then look at the particular scale we will use. Figure 5 shows a vernier scale reading zero. Note that the 10 divisions of the vernier have the same length as 9 divisions of the main scale. If the smallest division on the main scale is 1 mm then the smallest division on the vernier must be 0.9 mm. This vernier would then have a precision of 0.1 mm and results should be quoted to ±0.1 mm.

Figure 5: Vernier Scale

Let us see how it works. Examine figure 6. The position of the zero on the vernier scale gives us the reading. Here it is just beyond 2mm so the first part of the reading is 2mm. The second part (to the nearest 0.1mm) is read off at the first point at which the lines on the main scale and the vernier coincide. Here it is the 4th mark on the vernier (don’t count the zero mark). The reading is therefore 2.4 mm.


Figure 6: using the vernier

To see why examine figure 7, which is an alternative version of figure 6.

Figure 7: why a vernier works

In essence we have been finding the distance X, which is simply given by: X = D1 − D2 = 4×1 mm − 4×0.9 mm = 4×0.1 mm = 0.4 mm. So that is the general principle. Let us see how the travelling microscope scale works. In this case the smallest division on the main scale is again 1 mm, but the vernier has 50 divisions spanning 49 mm, so each vernier division is 49/50 mm and the precision is 1 mm − 49/50 mm = 0.02 mm. As an example, the reading in figure 8 is 113.68 mm.


Figure 8: example reading = 113.68mm. Note: unlike the examples in figures 5-7 the vernier is above the main scale.


III.2 ANALYSIS OF EXPERIMENTAL DATA: ERRORS IN MEASUREMENT

Contents

1. Introduction
   1.1 Important concepts of measurements and their associated "errors"
   1.2 The importance of estimating errors (with examples)
2. The nature of errors (a discussion in terms of single measurements)
   2.1 Classes of errors
   2.2 Illegitimate errors
       2.2.1 Mistakes in calculations
       2.2.2 Mistakes in measurement
   2.3 Systematic errors
   2.4 Random errors
   2.5 The interplay between systematic and random errors
   2.6 A note on experimental skill and personal judgement
3. Presentation of measured values
   3.1 Accuracy and precision
   3.2 Significant figures
       3.2.1 How many significant figures should be used for a value?
   3.3 The acceptable ways of presenting measured values
       3.3.1 Required format for undergraduates
       3.3.2 Alternative forms that may be met
4. Calculating with measured parameters and combining errors
   4.1 Error propagation: the general case
   4.2 Commonly occurring special cases
   4.3 Notes on performing error calculations
5. Multiple measurements (of a single parameter)
   5.1 Introduction
   5.2 Importance of repeat or multiple measurements (of a single value)
   5.3 Introduction to statistics (distributions, populations and samples)
       5.3.1 Distributions
       5.3.2 Line-shapes
       5.3.3 Terminology: "populations", "samples" and real experiments
       5.3.4 Experimental information found from a distribution
       5.3.5 Extraction of information as a function of sample size
   5.4 The statistics of distributions
       5.4.1 Mean
       5.4.2 Variance (mean square deviation) and standard deviation
       5.4.3 Standard error
   5.5 Summary - what to use as the random error as a function of n
6. Multiple measurements: straight line graphs
   6.1 Introduction
   6.2 Presenting experimental data on graphs
   6.3 Finding the slope and intercept (and their errors)
       6.3.1 The two approaches
       6.3.2 Finding gradient, intercept and their errors by hand
       6.3.3 Finding gradient, intercept and their errors by computation
   6.4 Error bars (and outliers)
       6.4.1 When to use error bars
       6.4.2 Outliers
       6.4.3 Dealing with a small number of data points
   6.5 Forcing lines to be straight
7. Some experimental considerations
   7.1 Terminology
   7.2 Comparing results with accepted values
   7.3 y = mx relationships
8. Some important distributions
   8.1 Binomial statistics
   8.2 The normal (or Gaussian) distribution
   8.3 Poisson distribution
   8.4 Lorentzian distribution

Additional reading These notes are intended to be just a brief guide to errors in measurement. For further details the following books are recommended:

G.L. Squires, "Practical Physics", 3rd ed., Cambridge University Press (1985)
N.C. Barford, "Experimental Measurements: Precision, Error and Truth", 2nd ed., J. Wiley (1985)
P.R. Bevington, "Data Reduction and Error Analysis for the Physical Sciences", McGraw-Hill (1969)

“Squires” is a very good, very accessible book that is available in the library. It has a strong emphasis on the relationship to experiment, was referred to extensively when re-writing these notes and is highly recommended.


1. Introduction

This document is intended as a reference guide for undergraduates in all years of physics degrees at Cardiff University. Most of the concepts covered in this document are introduced in 1st year courses and may be considered an essential basis for any experimentalist. There are many more sophisticated and specialist approaches that may be met during an undergraduate degree course that are beyond the scope of this document.

As its title indicates, this document is concerned with a particular aspect of the analysis of experimental data. A good start is therefore to consider what is meant by analysis:

“Analysis” generally is the detailed examination of “something” (in this case data). It is performed by a process of breaking up “something” that is initially complex into smaller parts to gain a better understanding of it.

(Data) analysis is therefore a type of problem that needs to be solved. With any type of problem often the most difficult part is finding a way to start addressing it. One place to start is by considering “errors”. But before that, some terminology.

1.1 Important concepts of measurements and their associated “errors”

The “true value” (of the physical quantity being measured) is as its name suggests. Determining the best estimate of the “true value” of something is usually an important aim of physics experiments.

The above statement causes a problem. It is not usually* possible to be certain of “true values”, experiments can only ever provide “measured values” and discrepancies are expected.

The word "error" in scientific terminology usually means "deviation from the true value" or "uncertainty in the true value"; it is not the same as "mistake".

Consequently it is the “measured values” or the “best estimate of the true value” that must be expressed along with their associated errors. Undergraduates in this School are asked to do this using the form**:

(measured value +/- error) units [1]

The measured value and its error clearly define an interval (from value - error to value + error). The situation isn’t entirely straightforward so for now all that will be claimed is that the experiment suggests that the “true value” lies within this interval.

This document is mainly concerned with methods of deciding upon reasonable/realistic estimates for the error. It will reveal the underlying importance of statistics and explain a method of combining errors whilst avoiding becoming a course in mathematics.

Although there will be some discussion of how errors arise in different experimental circumstances and their importance in extracting meaning from experiments these are not of primary concern. However, whilst ignoring specifics, it should be recognised that to improve understanding (our ultimate aim) it is often necessary to obtain “better” measurements with smaller errors achieved through use of better instruments and/or experimental technique.

* It would be wrong to say that there aren’t cases where exact true values can be found, for example:

How many electrons are allowed to exist in a particular atomic orbital? How many legs does a bird have?

** There is more on this and some alternative forms in usage later.


1.2. The importance of estimating errors

In order to get any meaning from measurements it is essential that the value obtained is quoted with a reasonable estimate of its error. Put the other way around, measurements without errors are meaningless.

Since the determination of errors is a time consuming process and the bane of students’ experimental lives this requires some justification.

Example: Suppose a student measures the resistance of a coil of wire and writes down:

"The resistance of the coil of wire was 200·025 at 10oC and 200·034 at 20 oC, so the resistance increases with temperature".

Without more information, the student's statement is not justified. We must know the errors in the measurements to say if the difference between the two figures is significant or not. If the error is ± 0·001 , i.e. each value might be up to 0·001 higher or lower than the stated value, then the difference between the two resistances is significant. But if the error is ± 0·01 the two values agree within errors and the difference is not significant.

Example: Two students perform an identical experiment to determine the acceleration due to gravity, g (on the Earth's surface this has a value of (9.80 +/- 0.02) m/s² - note that the error in g here arises from the variation in its value over the Earth's surface).

The first student returns g = (11 +/- 2) m/s² and the second student g = (10.2 +/- 0.3) m/s².

What can be said about these results?
- Without considering errors, all that can be said is that the results from the second student "appear" better than those from the first.
- With errors, only the first student's result agrees with the known value.
- But then again, the smaller error quoted by the second student implies that this data set is "better" in some sense (possibly resulting from more careful or skilful experimentation) and hints that there may be an underlying problem with the equipment or with the way the experiment was carried out.

Clearly there are problems with both data sets and it is not possible to get to the bottom of this just by looking at the numbers. However, errors are necessary in order to start to get an understanding of what is happening.

The next step in this case would be to go back to the original data to see if there were problems with the analysis carried out. If the analysis was reasonable in both cases it may well be that the second student has unearthed an issue with the experiment.

It would be highly unlikely in this case that some new physics has been unearthed, but with a different experiment this is one way that science works.

2. The nature of errors (a discussion in terms of single measurements)

Initially restricting discussion to single measurements of a physical parameter allows a sensible progression through the subject. However, almost all of what is included here applies equally to the more complicated cases with multiple measurements.

2.1 Classes of Error
The term "error" represents a finite uncertainty in a measurement due to intrinsic experimental limitations. These limitations can arise from a number of causes; here they will be considered as being of two distinct classes. These are:


Systematic errors - these are the result of a defect either in the apparatus or experimental procedure leading to a (usually) constant error throughout a set of readings. This type of error can be difficult to track down. One test is to perform measurements of a well-known value; if there is a discrepancy there may well be a significant systematic error present.

Random errors - these are the result of a lack of consistency in either the apparatus or experimental procedure leading to a distribution of results (if/when they are repeated) that is equally positive and negative. This is the type of error usually responsible for the spread of results when measurements are repeated.

In addition to the above, another type of error needs to be mentioned. It is different because it is not intrinsic to the experiment and so is often ignored when errors are discussed.

Illegitimate errors (or mistakes) - these are the result of mistakes in computation or measurement. This class of error is worthy of consideration because mistakes happen and have to be dealt with ethically and with scientific integrity. Such errors are usually (but not always) easily identified as obviously incorrect data points or values far from expected.

Good results are only obtained by eliminating illegitimate errors and minimising both systematic and random errors.

The rest of section 2 discusses these classes of errors in turn and in more detail.

2.2 Illegitimate errors (mistakes)
Reminder: this class is usually ignored since definitions of scientific errors exclude it. One way of viewing this is that science works on the implicit assumption that every effort has been made to eradicate all mistakes from experimental results before they are presented. Scientists being human, mistakes will get through (some are really difficult to identify), but published work is open to being checked by others.

At this point it is a good idea to distinguish between mistakes in calculations and measurement.

2.2.1 Mistakes in calculations
These are simple to deal with (when identified) as there is no judgement involved: either a mistake has been made or it hasn't. Students are generally poor at going back to their original data and checking calculations, even when faced with values that are out by orders of magnitude. You will make mistakes with calculations and you will need to go back over your numbers to figure out where. Hint: if you are out by factors of ~10, 100, 1000 etc. the place to start is any conversion between units (e.g. millimetres to metres).

Example: Subtle calculation errors can arise through the number of significant figures used in performing a calculation. In some contexts you might be fully aware - in "back of the envelope" calculations rounding approximations such as g = 10 m/s² or e = 10⁻¹⁹ C might be made in order to facilitate quick combination of values, and this is fine when order of magnitude results are adequate. However, when accurate values are required, premature rounding can introduce illegitimate errors.

2.2.2 Mistakes in measurement
These are far more contentious as there is a danger of consciously or sub-consciously manipulating results, possibly to fit certain pre-conceived expectations. This is scientific fraud. But it is also true that mistakes can be made - with a subsequent need to ignore otherwise misleading results.


So how is this handled with scientific integrity? The general principle is not to let yourself get into a situation where you might be tempted to fiddle results.

Example. After data collection it may become apparent that an individual data point lies far removed from all the others.

Partly based on how far out this point lies, a decision may then be made to ignore this data point in further analysis. However, in the analysis it should be made clear that such a decision has been made and why (if it isn't clear); the point should be labelled as an "outlier". This process allows re-analysis with inclusion of the outlier - such a process may be performed in any case in order to see its effect.

Example. During a measurement it may be suspected that a mistake has been made, for example in counting the number of swings of a pendulum, in starting/stopping a timer or in the settings applied to an instrument. If it is known, or suspected at the time of performing the measurement, that an error was made then the data point or set of points can be safely discarded. However, if the measurement only becomes suspect as a result of the values obtained then it is not valid to discard them out of hand, they then fall into the category of “outliers”.

In both of the above examples the issue is best resolved by performing repeat measurements (not often possible in years 0 and 1 but required from year 2 onwards).

There will be very little further consideration of illegitimate errors in this document.

2.3 Systematic errors
Systematic errors can arise in an experiment in a number of ways. For example:
- Zero error: from use of a ruler that is worn at the end, or a voltmeter may read a non-zero value even when no voltage is applied across its terminals.
- Calibration error: an incorrectly marked ruler can produce a systematic error which may vary along its length. Wooden rulers are good to about 1/2 mm in 1 metre. Even expensive steel standards must be used at the correct temperature to avoid a systematic error.
- Parallax error: this may occur when reading the position of an object or a pointer against a scale (e.g. a ruler) from which it is separated. The reading can depend on the viewing angle.

Timing errors are a common example of systematic errors. Apart from errors introduced by a clock running too slowly there is also the tendency of a human operator (or indeed electronics) to start a clock consistently too soon or too late (which may show up as a zero error).

To achieve good results systematic errors must be carefully considered and reduced so that they become insignificant (in most cases it is impossible to remove them entirely). Two tricks that can be useful here: (i) compare the results to another experiment made using different apparatus and using a different method; (ii) where possible use the equipment to make measurements of known values. In both cases, if there is good agreement there is greater confidence that the systematic error is insignificant and results can be trusted. Comparisons with known values allows for calibration of instruments.

2.4 Random errors
These, as mentioned, arise from fluctuations in observations so that results differ from experiment to experiment. It is easy to see that these will arise when experiments are performed by hand, as human factors mean that the way the experiment is performed is never exactly the same. But in a similar fashion measuring instruments are also prone to variation, for example: both mechanical and electrical instruments will vary with the ambient temperature (and other factors), both analogue and digital instruments suffer from rounding errors, low signal measurements are prone to the effects of noise, etc.

The reduction of random errors can be achieved in three ways: improvement of the experiment, refinement of technique and repeating the experiment.

2.5 The interplay between systematic and random errors

Illustrated in figure 1 are the results of a number of measurements of a quantity x (which could be a length, voltage, temperature etc.).

Figure 1 (a) Random errors only, any systematic error is insignificant. (b) Significant random and systematic errors present.

In this figure the position of the true value is marked and each small vertical line marks the result of an experimental determination of x. In figure 1a the results are scattered about the true value with no bias for low or high values, so you would expect the average of all the results to be close to the true value. This is the case where random errors dominate - any systematic errors are negligible. In figure 1b there is, in addition to random errors, a systematic error which means that the average value is shifted to a value smaller than the true value.

From the above it is clear that:
- Measured values close to the true value are obtained if the systematic error is small.
- A small systematic error will only be revealed when the random error is small.

Less obviously:
- It is possible to have a small random error even with a large spread of data points - this is addressed later in the section on multiple measurements.
- Systematic and random errors are always present. However, systematic errors are ignored when they are small compared to random errors.

2.6 A note on experimental skill and personal judgement
Experimental skill and personal judgement are both important. Students should find this statement both worrying and reassuring at the same time. Worrying because simply following a set of instructions often produces bad results, reassuring because there are rewards for practical ability and training. Bad results can be understood to be the consequence of having significantly larger random and systematic errors. So how can this come about?

Example: The error in a length measured with a rule will be influenced by the fineness of the graduations on the scale, but the position of the scale relative to the object and how the system is viewed are important (for both random and systematic errors) as is the ability to interpolate between graduations (mainly for random errors).

Generally, experimenters should understand the equipment in use, acquire a feel for it and, based on this, subsequently use their judgement. This applies equally to experiments in which the data acquisition is handled by a computer. There is a tendency for students to have a greater trust in results obtained via a computer. This is dangerous and it is better to treat all equipment with the same initial (healthy) mistrust.

3. Presentation of measured values
Knowing about classes of errors, it is now possible to discuss the presentation of measured values in greater detail, starting with more of the terminology that accompanies it.

3.1 Accuracy and precision

As with “errors” the terms "accuracy" and "precision" have distinct meanings in experimental science. In fact, accuracy is closely linked to both systematic and random errors whilst precision relates only to the random error.

Accuracy - The accuracy of an experiment is determined by how close the measurement is to the true value, in other words how correct the measurement is. From the above sections it should be clear that a value can only be accurate if the systematic error is small, however, even with a small systematic error a measurement will lose accuracy if the random error increases.

Precision - The precision of an experiment is determined by the size of the spread of values obtained in repeated measurements regardless of its accuracy. As illustrated in figure 2 a smaller spread of values corresponds to a more precise measurement. From the above sections, a value can only be highly precise if the random error is small. Precision and random error are essentially equivalent - the random error is often termed the precision of a measurement.

Figure 2. Two groups of measurements of x with different precisions (for a small systematic error the values are distributed about the true value).

Some examples may serve to illustrate these definitions:

Example: Suppose a steel rod is measured to be (1.2031 +/- 0.0001) m in length, i.e. its length has been expressed to the nearest 0.1 mm. This measurement implies a precision of 0.1 mm. But suppose that, due to wear at the end of the ruler used to measure the rod, this figure is in error by 1 mm. Then, despite the quoted precision, the measurement is inaccurate.

Note: The precision quoted here is more formally known as the "absolute precision". This is distinct from the "relative precision", which is given in terms of the fraction (or percentage) of the value of the result. In this case the relative precision is 0.0001/1.2031 = 8×10⁻⁵ (or 0.008%).

Example: Suppose that the true value of the temperature of an object is 20·3440 °C: a measurement of 20·3 ± 0.1 °C is accurate (it agrees with the true value within errors); a measurement of 20·33 ± 0.02 °C is both accurate and more precise (and could be claimed to be "more accurate"); a measurement of 20·322 ± 0.005 °C is more precise but now must be stated to be inaccurate because it does not agree with the true value within error.

The terms "accuracy" and "precision" as defined allow results and experiments to be considered more meaningfully. The second example illustrates that, as the random error is reduced and the precision improves, systematic errors previously hidden start to emerge. When systematic errors are evident there is usually little point in improving the precision further - steps should first be taken to reduce the systematic errors.

In the rest of this guidance it will be implicitly assumed that systematic errors are negligible compared to random errors. This will allow the discussion to be presented such that when a more precise measurement is made, the accuracy will also be greater. Bear in mind that in real experiments this will not always be true.

3.2 Significant figures

In the previous section it was seen that as the precision of the experiment improved the number of significant figures (s.f.s), used to quote the result, increased. By contrast, by their nature errors are estimates (i.e. imprecisely known) and so can only be quoted to 1 or 2 s.f.s. This can be a little confusing at first and, perhaps not surprisingly, a common mistake that students make is to use an incorrect number of significant figures. This section uses two examples in an attempt to clarify the situation - ultimately it is simply common sense.

3.2.1 The use of significant figures

Example: A measurement of distance can be correctly quoted as (4.85 ± 0.02) mm or (0.485 ± 0.002) cm or (0.00485 ± 0.00002) m. These values are equivalent; all we’ve done is change the units. The significant figures (s.f.) are 4, 8 and 5, hence in this case all measured values are quoted to 3 s.f. The largest figure (4 in the above example) is the most significant figure and the smallest (5 here) is the least significant figure. The position of the decimal point therefore has no bearing on the number of s.f. The error here is quoted to one s.f. The number of significant figures used for the measured value is determined by the least significant figure in the error. This is also the (fixed in this example) precision of the measurement.

Example: To illustrate this further take the temperatures given in the earlier example - (20.3 ± 0.1) °C, (20.33 ± 0.02) °C, (20.322 ± 0.005) °C. These measured values are quoted to 3, 4 and 5 significant figures (s.f.) respectively; this contrasts with their errors, (here) always quoted to 1 s.f. (remember that a maximum of 2 s.f. are allowed for errors). In all cases, the size/decimal place of the least significant figure in the error determines the least significant figure in the value and therefore the precision of the measurement. The three values quoted are therefore of different precisions.

Finally, it would be wrong to quote these values in the following ways:

(20.33 ± 0.1) °C (value more precise than error)

(20.322 ± 0.0005) °C (error more precise than value)

(20.322 ± 0.125) °C (too many s.f. in the error)

3.3 Acceptable ways of presenting measured values

3.3.1 Required format for undergraduates

Reminder: the format required by the School has already been given as (measured value +/- error) units. The subtleties of the required format will be addressed using an example, the value of a distance S:

S = (2.36 ± 0.04) km [2]

The value and error are enclosed in brackets because the units apply to both. The form above allows easy use and appreciation of both numbers and units. The alternative form (2360 ± 40) m is equally as good. The alternative form (236000 ± 4000) cm is less easily appreciated. Using powers of 10 instead of prefixes (such as k for kilo) is certainly allowed. If a power of 10 is quoted, rather than incorporated in the units, it must go outside the brackets, e.g. S = (2.36 ± 0.04) × 10³ m. If a power of 10 is quoted then the exponent will be a positive or negative integer, n. (Some publications may insist that the exponent should be a multiple of 3, i.e. use 10³ⁿ, but this is not something that we insist upon for undergraduate lab diaries or reports.)

The value of the quantity and its error should be quoted to the same power of 10 and in the same units so that they can be compared easily (e.g. 2.36 km ± 40 m would not be acceptable).

3.3.2 Alternative forms that may be met

The required format above is an unambiguous style of presentation but other formats are used in which the error is not given explicitly. Students should be aware of the different ways of presenting data as they should always be clear of the errors associated with any experimental values that they meet.

Alternatives to the required format: The simplest way of indicating the precision of a measurement is through the number of significant figures quoted (as is done in the required format). Here though no error is given and an error (or precision) of 1 in the final figure is inferred. For example, if presented with a length given as 1.23 m, the inference is that in the required format it would be given as (1.23 ± 0.01) m.

Clearly there is potential for ambiguity here. For example, if there was a requirement to present all lengths in mm then, with the above example, there is a temptation to quote the value as 1230 mm, which is clearly wrong as the zero is not significant. The value could instead be quoted as 1.23 × 10³ mm.

Although not recommended here, scientists often quote one more figure than is justified by the error. In the required format this might appear as (1.232 ± 0.01) m and it is clear that the last figure is not significant. Where the error is not quoted then it is necessary to distinguish between figures that are significant and those that are not, and this can be done by placing insignificant figures in brackets or as a subscript, i.e. 1.23(2) m or 1.23₂ m. The reason for quoting an extra figure is to avoid introducing (a form of illegitimate) error if the value is used in subsequent calculations (see section 4 below, “Calculating with measured values…”).


Fundamental constants and material parameters: Almost certainly the most common measured parameters that students are exposed to are the fundamental constants quoted in textbooks, lab books, data books etc. Following that may be material properties such as the speed of sound in air or the density of water. It can be forgotten that these parameters are (almost always) measured parameters and so are known to limited precision. So what to make of the values presented?

It is a fact of life that the presentation of these “known”* or “accepted”* values does lack consistency, although in many cases it is clear what has been done. For example in the School’s “Mathematical Formulae and Physical Constants” handbook fundamental constants are quoted to (mostly) 3 s.f.s. Since the constants are known to much greater precision than this, here it is obvious that the values have been rounded - and because of this the final figure has a precision (error) of 1. In addition, constants handbooks generally indicate the associated errors and often reference the source of the information. The situation is less clear for example when values are rounded but not obviously so, and it should be remembered that values quoted in old publications may be out of date.

* Undergraduate experiments often measure parameters that have well “known” or “accepted” values. The precision with which they are established lends itself to thinking that these are “true” values and they may reasonably be used this way in teaching laboratories. However, bear in mind that at the limits of their precision there may well be disagreements between the different laboratories or experiments used to determine them.

4. Calculating with measured parameters and finding overall errors (error propagation)

Sometimes in science finding the parameter that we measure directly is the main point of the experiment; sometimes it is necessary to incorporate it into a function, combine it with known constants or combine a number of measured parameters and constants. For example, the value of a resistor R can be found by measuring the current I through it and the voltage V across it and using R = V/I.

The process of using functions or combining values is usually straightforward. However, it is not obvious how the corresponding errors are determined, a process commonly known as “error propagation”. (Reminder - only random errors are being considered here.)

This section starts by considering the general case before presenting the outcomes for commonly occurring special cases.

4.1 Error propagation: the general case

The problem here is to find the overall change of a function due to (small) changes in its component parts. The answer can be found using calculus: if a value z is a function of x and y (i.e. z = f(x,y)), partial differentiation can be used to find the effect of a small change in either x or y. (Partial differentiation is taught in the first year; the process is essentially one of differentiating with respect to (w.r.t.) one variable whilst holding all the others constant.)

The partial differential of z with respect to x (holding y constant) is written ∂z/∂x, so that the change in z (i.e. Δz) due to a small change in x (i.e. Δx) is:

Δz = (∂z/∂x) Δx    [3]


There is a similar expression for changes in z due to changes in y, and the total change in z, i.e. the “total differential”, is then given by:

Δz = (∂z/∂x) Δx + (∂z/∂y) Δy    [4]

The above equation concerns two variables but clearly the number of terms on the right hand side would increase to match the number of variables in an arbitrary function. Even so, Δz in the above equation cannot be used as the combined error arising from the errors, Δx and Δy, in x and y respectively. The reason is that in the above equation the signs of both the derivatives and the errors are important. As presented, the signs of the multiple terms (two here) could lead to the situation where two large but opposite contributions cancel each other, resulting in an underestimated error.

One way to resolve this issue would be to add the magnitudes of the terms on the right hand side of the equation. However, this is equivalent to having the error contributions due to x and y always reinforcing each other, which is not realistic either. Instead, the conventional solution is to square all of the terms, i.e.:

(Δz)² = (∂z/∂x)² (Δx)² + (∂z/∂y)² (Δy)²    [5]

Δz in this equation is the overall error. The resulting errors are realistic and are often said to have been combined in “quadrature” (quadrature is often used to mean squaring).

Example. Resistance, R = f(V,I) = V/I.

The aim is to show how the overall error for resistance is found using the values and errors for voltage and current. First consider the total derivative:

ΔR = (∂R/∂V) ΔV + (∂R/∂I) ΔI = (1/I) ΔV − (V/I²) ΔI

Rearranging (dividing through by R = V/I):

ΔR/R = ΔV/V − ΔI/I

Squaring each term and combining in quadrature:

(ΔR/R)² = (ΔV/V)² + (ΔI/I)²

The methodology used here for a quotient can be used generally and the more common results are given in the next section.
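As a quick illustration (not part of the laboratory script), the quotient rule above can be checked with a few lines of Python, the language used later in the course for computer analysis. The voltage and current values below are invented purely for the example:

import math

V, dV = 5.00, 0.05      # measured voltage and its error (invented values)
I, dI = 0.250, 0.005    # measured current and its error (invented values)

R = V / I
dR = R * math.sqrt((dV / V)**2 + (dI / I)**2)   # (dR/R)^2 = (dV/V)^2 + (dI/I)^2

print(f"R = ({R:.1f} +/- {dR:.1f}) ohm")        # gives R = (20.0 +/- 0.4) ohm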

4.2 Commonly occurring special cases

In the table below one or two measured parameters (A and B) and a constant k are combined through addition, subtraction etc. to produce a result Z. The error ΔZ in Z is then expressed in terms of the errors, ΔA and ΔB, in A and B respectively.


Table 1. Rules for finding errors when values are combined or functions used

Z = A + B  or  Z = A − B        (ΔZ)² = (ΔA)² + (ΔB)²

Z = AB  or  Z = A/B             (ΔZ/Z)² = (ΔA/A)² + (ΔB/B)²

Z = kA                          ΔZ = kΔA

Z = k/A                         ΔZ = kΔA/A²

Z = Aⁿ                          ΔZ/Z = nΔA/A

Z = ln A                        ΔZ = ΔA/A

Z = e^A                         ΔZ/Z = ΔA

Note: to find the error when constants are present simply consider that the error in the constant is zero.

Example: If the length of a rectangle is (1.24 ± 0.02) m and its breadth is (0.61 ± 0.01) m, what is its area and the error in the area?

Here A = 1.24 m, ΔA = 0.02 m, B = 0.61 m, ΔB = 0.01 m, Z is the area and ΔZ is the error in the area, found by combining errors.

The area, Z, is the product of A and B, i.e. Z = AB = 0.7564 m².

The appropriate rule is (ΔZ/Z)² = (ΔA/A)² + (ΔB/B)²

= (0.02/1.24)² + (0.01/0.61)²

= 2.602 × 10⁻⁴ + 2.687 × 10⁻⁴ = 5.289 × 10⁻⁴

so that ΔZ/Z = 0.023 and ΔZ = 0.023 × 0.7564 = 0.0174 m².

The area can therefore be expressed as (0.756 ± 0.017) m² or as (0.76 ± 0.02) m².
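For those who want to check such calculations by computer, a minimal Python sketch of the same rectangle example is given below (the numbers are those used above):

import math

A, dA = 1.24, 0.02    # length and its error (m)
B, dB = 0.61, 0.01    # breadth and its error (m)

Z = A * B                                       # area
dZ = Z * math.sqrt((dA / A)**2 + (dB / B)**2)   # product rule, errors in quadrature

print(f"area = ({Z:.2f} +/- {dZ:.2f}) m^2")     # area = (0.76 +/- 0.02) m^2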

4.3 Important notes on performing error calculations

Performing error calculations can be tedious and time consuming. But it has to be done and it is worth paying attention to the numbers. It is inevitably true that different parameters will have different contributions to the final error. Being aware of this can be useful in at least two ways:

Error contributions that are significantly smaller than others may reasonably be left out of calculations, saving time. This is easily performed by comparing the relative precision of the contributions, i.e. comparing ΔA/A with ΔB/B etc.

The relative precision of the different contributions is instructive in indicating weaknesses in the overall experiment, e.g. where to spend effort to find improvements.


5. Multiple measurements (of a single parameter)

5.1 Introduction

As has already been hinted at in sections 2 and 3, repeated or multiple measurements (from which average values and spreads can be found) are important in experimental work associated with the reduction of random errors. In fact one of the cardinal rules of experimental work is that whenever possible repeat measurements should be made. This section is concerned with repeated measurement of a single parameter; the more common situation in physics labs, where a variable is changed and the resulting x, y data set is plotted on a (preferably straight line) graph, is dealt with later.

5.2 Importance of repeat or multiple measurements (of a single parameter)

A single measurement of a parameter relies on (often personal) estimates of an error based on the equipment being used (for example on the smallest graduation of a meter or rule). When repeated measurements are made:

The second measurement acts as a check that the first one is reasonable, i.e. not subject to gross error through carelessness.

A relatively small number of repeats indicates the range within which the true value lies.

A relatively large number of repeats indicates the range and the distribution of measurements - and allows the (random) error of the measurement to be reduced, so improving its precision.

If an estimate is made of the random error then repeated measurements can act as a test of whether this was correct and therefore that the measurement was understood.

As the number of measurements, n, increases from 1 to infinity the way that the data are handled and the error determined changes; however the mathematics follows statistically accepted rules*. In the following discussion attention will be paid to the number of measurements as this has clear experimental relevance. In teaching laboratories many experiments involve n ~ 8 and it is possible to get away with a superficial understanding of statistics. In research the number of measurements tends to relate to the research field. In astronomy there are large numbers of stars and galaxies to examine, n can be large and there is no escaping statistics.

* The terminology of statistics will be introduced without its mathematical justification in this document (see statistics books or further reading for more maths).

5.3 Introduction to statistics (distributions, populations and samples)

In this section the terminology of statistics relating to data distributions is introduced and related to experimental error analysis/determination.

5.3.1 Distributions

As the number of measurements increases, in the absence of systematic errors, we expect the mean to become closer to the true value. In other words it will always be the case that the mean of a set of values is the best estimate of the true value (more on this below). It is also reasonable to expect more values close to the true value than further away, i.e. the distribution of measurements has a central tendency and is expected to peak at or close to the true value. With a reasonable number of points the distribution can be displayed by plotting the number of points that occur in a certain interval against the measurement value. As the number of points increases the interval used can get smaller until, for an infinite number (the limiting case), the distribution is continuous and is known as the “limiting distribution”. An example of a (close to) limiting distribution is shown in figure 3 below.


In figure 3 the y-axis shows the number of measurements having a given value (continuous line) or the number of measurements in a certain interval (bars). Often the y axis shows either the fraction of measurements in a certain interval (bar charts) or the probability of having a certain value (limiting distribution). This is achieved by normalisation - dividing by the total number of measurements. The result of normalisation is that the sum of all probabilities or the integral over all measured values will be unity in both cases.

Figure 3. Distribution of a set of data. A continuous line and three bars are shown to represent a large number of data points.

5.3.2 (Spectroscopic) line-shapes

Very closely related to the distributions described in the previous section are line-shapes of various origins, for example the intensity of atomic emission lines versus wavelength or the amplitude of oscillation of a resonant mechanical system versus frequency. Different although related terminology can be used to describe the two cases. The statistical terminology for distributions will be discussed later but the general terminology for line-shapes will be introduced here.

Figure 4 shows an intensity versus frequency line shape (actually the same shape as the distribution in figure 3). On the assumption (as it is not shown) that the intensity falls to zero well away from the “peak”, the “full maximum” of the intensity is shown along with its full width at half maximum (FWHM). The FWHM, being independent of the intensity of the peak, is a convenient and often quoted way to describe line-shape features. A peak that is symmetrical will often be characterised by its peak intensity, position (a frequency in this case) and its FWHM. Note: The term “half width” is sometimes used and has the same meaning as FWHM - it can be understood to mean the width at half height.

An asymmetric peak (as figure 4 is) might be additionally characterised by its half width at half maximum (HWHM) values either side of the peak position (i.e. that of the maximum of the peak).


Figure 4. A (slightly) asymmetric line shape, perhaps of a spectroscopic feature, with its full maximum (i.e. peak intensity), its full width at half maximum (FWHM) and its half width at half maximum (HWHM).

5.3.3 Terminology: “Populations”, “samples” and real experiments

Returning to distributions, although it is the limiting distribution that characterises an experiment, real experiments have a finite number of data points and the role of statistics is to extract the best estimates of true values and associated errors. How this is achieved will be discussed later, for now only the general principles will be of concern.

If the limiting distribution is viewed as resulting from all possible measurements then a real experiment may be viewed as a limited “sample of all possible measurements”. A single measurement then may take any value within the distribution and is more likely to be found near to the peak, i.e. the mean or true value. In many experiments it’s possible to conceive of an infinite number of repeats and this set of data is known as the “population”. In other words a real experiment takes a “sample” of a “population” of measurements.

The origin of the term population may be understood by thinking of statistics more widely. For example surveys may be made of political views in Wales. Not all people will be included, those that are constitute the “sample” whereas all possible people in Wales constitute the “population”. Likewise, in astronomy a survey may consider a sample of the (finite) population of galaxies.

5.3.4 Experimental information found from a distribution

Experimentally what is required from a sample is the best estimate of the true value, sometimes also the shape of the limiting distribution but especially its (random) error:

The best estimate of the true value is easy - it is simply the mean value of the “sample”.

The shape of the limiting distribution clearly is of interest because its width corresponds to the “precision of the apparatus”* or the “experimental precision”, i.e. in some sense it is a measure of how good the experiment is independent of the sample size (although a large sample size is required to find it reliably).

The random error (“precision of the experiment/measurement”) not only improves with increasing sample size but is also estimated differently depending on sample size.


5.3.5 Extraction of random error as a function of number of measurements (sample size)

It is important to emphasise that here the concern is with cases where more than one measurement is made and the random error is determined by analysing the distribution or spread of data.

The following discussion concerns an increasing number of measurements (samples) of an arbitrary experiment.

As mentioned previously a single measurement (n = 1) provides one sample of the limiting distribution and although it is more likely to be close to the true value (rather than out in the wings) occasionally the experimentalist will be unlucky.

Very quickly, with n = 2, 3, 4, …, averaging gives a lot more confidence in our estimate of the true value and, more importantly for errors, starts to give an idea of the limiting distribution. At this point the error will often be taken to be half the range or spread of the values (because we quote ± error).

With an increasing number of measurements a dilemma arises. The range/spread of values is likely to increase whereas the random error should sensibly decrease. One (quick and easy but statistically unsatisfactory) approach is to use the range in which 50% of the values fall to indicate (twice) the error; this is known as the “probable error”. This approach is illustrated in figure 5; it is a convenient approach to use for 8 or 12 data points, where the outer 4 or 6 points respectively can be discarded.

Figure 5 Average value and probable error range from a set of eight data points

The probable error, however, suffers a similar limitation to the range as it does not progressively decrease with increasing n. Neither is it a required step, as statistical techniques (described below) could and really should be used.

With a large number of measurements (let’s say n > 10), and even before a well defined distribution emerges, statistical techniques are used - although cautiously, because this is the regime of small number statistics.

With very high n and a well defined distribution, its mean (our best estimate of the true value) can be found to high precision. In fact its error approaches zero as the number of measurements approaches infinity. What this is saying is that even when the precision of the experiment is low, with enough measurements a value can be found with a low error.


But, as you would expect, it is easier to get a low error (i.e. using fewer measurements) when the experimental precision is high - the precision of the experiment does matter. The next section introduces the formal mathematics of this process.

Note: it isn’t easy to say how large n needs to be in order for a distribution to become well defined. However, as a guide with n ~ 50 it would be reasonable to draw a distribution split into 4 or 5 intervals (see figure 3). If nothing else it should be clear from this that in order to approach a limiting distribution n needs to be very large indeed.

5.4 Formal statistics (of distributions)

All experimental results are affected by random errors. In practice it turns out that in the majority of cases the distribution function which best describes these random errors is the “normal” or “Gaussian” distribution. Other mathematically described distributions include “Poisson”, “Binomial” and “Lorentzian”. Distributions such as the one presented in figure 3 may not have a basis in mathematics.

Reminder: statistics work well with large but not small numbers of measurements - the term “small number statistics” doesn’t have a poor reputation for nothing.

5.4.1 The mean

If n measurements of a quantity x are made and these are labelled x₁, x₂, x₃, …, xₙ then the mean is given by:

x̄ₙ = (1/n)(x₁ + x₂ + x₃ + … + xₙ) = (1/n) Σ xᵢ    [6]

(with the sum running from i = 1 to i = n).

Often used alternative symbols for the mean include x̄ and μ.

5.4.2 Mean square deviation (variance) and standard deviation(s)

Individual values of xᵢ will differ from x̄ₙ and these differences are intrinsically linked to the nature of the distribution. The deviation of a particular measurement, xᵢ, from x̄ₙ is given by:

dᵢ = xᵢ − x̄ₙ    [7]

Deviations may be either positive or negative, and both the sum and the mean of the deviations will be zero. To avoid this the absolute values of the deviations could be used, but it makes more sense mathematically to use the square of the deviations. The sum of square deviations would simply increase with the number of measurements, whereas the mean value would be expected to converge to a value representative of the limiting distribution. The mean square deviation of n measurements of x, s², is known as the “sample variance” and is given by:

s² = (1/n) Σ dᵢ² = (1/n) Σ (xᵢ − x̄ₙ)²    [8]

From this it is a short step to the root mean square deviation, normally known as the “sample standard deviation”, s:

s = [ (1/n) Σ dᵢ² ]^(1/2) = [ (1/n) Σ (xᵢ − x̄ₙ)² ]^(1/2)    [9]

Strictly speaking the term 1/n should be replaced by 1/(n − 1), associated with the number of degrees of freedom, so that we should use:

s = [ (1/(n − 1)) Σ dᵢ² ]^(1/2) = [ (1/(n − 1)) Σ (xᵢ − x̄ₙ)² ]^(1/2)    [10]

The term “sample standard deviation” is used since it is calculated from a sample of n measurements. With an infinite number of samples and the “true” distribution the terms used are the variance, σ², and the standard deviation, σ.

Standard deviations are useful quantities: they have the same units as the measured value, they relate to the width of the distribution and they are often described as the precision of the measurement. However, as hinted above, there is more to this story.

5.4.3 Standard error (standard deviation of the mean), σ_x̄

As discussed above, the standard deviation gives a measure of the width of a distribution, whereas what is required is the error in the mean value, a value that can become very small as the distribution is better known (through increasing the number of measurements n).

The error in the mean will be taken as given by the “standard error”. Mathematically, the standard error is the “standard deviation of the mean” and is found from the sample standard deviation and n:

σ_x̄ = s / n^(1/2)    [11]

It is now possible to state that the value for a measurement, X, can be expressed as:

X = x̄ₙ ± σ_x̄    [12]

In experimental terms the 1/n^(1/2) dependence of the standard error (for large n) indicates that although it is possible to use repeats to find a value to high precision/small error, this is hard work and it is often better to work on improving the precision of the measurement.
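A short Python sketch of equations [6], [10] and [11] is given below; the eight “repeat readings” are invented purely for illustration:

import math

x = [9.81, 9.79, 9.84, 9.80, 9.78, 9.83, 9.82, 9.80]     # invented repeat readings

n = len(x)
mean = sum(x) / n                                         # eq. [6]
s = math.sqrt(sum((xi - mean)**2 for xi in x) / (n - 1))  # sample standard deviation, eq. [10]
se = s / math.sqrt(n)                                     # standard error of the mean, eq. [11]

print(f"mean = {mean:.3f}, s = {s:.3f}, standard error = {se:.3f}")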

5.5 Summary - what to use as the random error (precision) as a function of n

Single measurement - estimate the error.

Small number of measurements - whilst using best judgement: the range of the data might be used for a very small number of measurements; with a few more measurements (and possibly taking convenience into account) choose between the probable error and possibly the standard deviation or standard error.

Large number of measurements - with the distribution emerging, use the standard error.

Some of the first year lab experiments are designed to illustrate how this works in practice. However a guiding principle is to be open and clear about what error is chosen and why.


6. Multiple measurements: straight line graphs (y = mx +c)

6.1 Introduction

The previous section discussed multiple measurements of the same value. However this is not how undergraduate laboratory physics experiments are usually performed. If a quantity y depends upon another x, then rather than fixing on a value of x and making repeated measurements of the corresponding value of y, it is usually much more revealing to vary x. The form of the dependence of y upon x is then most simply demonstrated by plotting a graph. The statistics of repeat measurement in section 5 still applies but in a modified form - think of the different points as being, in some sense, a repeat.

The understanding and use of graphs is an essential skill. Introductory teaching laboratories concentrate on using straight line graphs, which are by far the easiest to analyse, and great efforts are made to ensure that graphs emerge in this form.

6.2 Presenting experimental data on graphs

Scientific experiments examine cause and effect relationships where changing one variable (known as the independent variable) causes a change in a second (dependent) variable, both of which are measurable.

(Important: Conventionally the independent variable is plotted on the horizontal (x) axis and the dependent variable on the vertical (y) axis of the graph.)

For example, how the length of a spring depends upon the weight hung from its end may be studied. The length is the dependent variable so it is plotted on the y axis, as in figure 6.


Figure 6. Example graph, spring length (in m) versus weight (in N); the line through the data is a “best fit” line.

On the graph, as is quite common, a line through the data is shown. The meaning of any such line should be made clear; in this case the figure caption indicates that the line is a “best fit”. In other words it is the straight line that best represents the data and from which information is extracted. In this case, from the gradient a value for the spring constant may be determined. The alternative is that a line is a “guide to the eye”; this is a line with no scientific meaning. In a lab diary this information can be given at any convenient place on the graph; in a report, inclusion in the figure caption is usually best. Error bars can also be included on graphs; this is discussed in a later section.

6.3 Finding the Slope and Intercept (and their errors)

The equation for a straight line is given by:

y = mx + c [13]


where m is the gradient (or slope) of the line and c is the intercept with the y axis. It is necessary to find values and errors for both, and two approaches are possible.

6.3.1 The two approaches

By hand, where a graph (drawn in a lab diary) is analysed using the judgement of the experimentalist. This approach, although subjective, gives students an understanding of the process of data analysis and it keeps students “close” to the data. Both of these are an essential part of the process of equipping students with the skills and experience to develop as a scientist.

By computer, where the data is fed into software (such as EXCEL or coded in Python) that graphs and analyses the data. This approach has the advantage of using well defined statistical techniques and, in these terms at least, giving consistent answers. There are a number of disadvantages not least that students lose their critical faculties and tend to believe any number emerging from a PC or calculator (regardless of the quality or nature of the data entered).

In the UG laboratories students generally use the by-hand approach until the end of the year one autumn semester, but from then on computer analysis is gradually introduced.

6.3.2 Finding gradient, intercept and their errors by hand

The approach is illustrated in figure 7. Having judged the best straight line, the gradient m and the intercept c can be determined. Two well separated arbitrary points on the best fit line are determined (x1,y1 and x2,y2). This is a statement that it is the best fit line that represents the experiment (students are often tempted to use extreme measured data points - this is incorrect). From the two selected points the gradient can be calculated:

m = dy/dx = (y₂ − y₁) / (x₂ − x₁)    [14]

c can then be found using the straight line equation, m and either of the two points (or indeed any point on the best fit line):

c = y − mx    [15]

For clarity a right angled triangle is drawn linking the two chosen points on the best fit line.

Figure 7. Determining m (= dy/dx) from a best fit line. Note that (x₁, y₁) and (x₂, y₂) are points on the best fit line, i.e. they are not data points.


Finding the errors is achieved by repeating the above procedure for one or two other straight lines which are as far away in gradient (one larger, one smaller) from the best fit as possible, but which are judged to be nevertheless still reasonably consistent with the data. These are known as “worst possible fit lines” or “worst fit lines”. As shown in figure 8 the lines should pivot about the approximate centre of the data points. These lines provide two further values for m and c from which errors in m and c can be estimated. With the best and worst fit lines the errors in m and c are given by:

error in m:  Δm = |m_best fit − m_worst fit|    [16]

error in c:  Δc = |c_best fit − c_worst fit|    [17]

In practice it is allowable to use one worst fit line, this saves time and is justified since it is error estimates that are found.

One approach to judging the position of the “worst fit” line is to draw (as shown by dotted lines in figure 8) or imagine the presence of lines parallel to the best fit that encompass the spread in y values above and below the best fit and have the same x range as the data. In effect these worst fit lines provide estimates of the possible range of deviations in m and c and, whilst useful, are extremely pessimistic.

Remembering back to Gaussian distributions arising from repeated measurements of the same value, with more measurements the errors in m and c must decrease. Whereas, with this simplistic approach, more measurements are likely to sample a larger spread about the best fit line and therefore result in slowly increasing errors. When performing analysis via computation, as described in the next section, standard errors in m and c that account for the number of data points are found. This explains why errors found by hand are much larger than those by computation.

Figure 8. Best and worst-possible fit lines used to estimate errors. The lines pivot about the centre of the data range. The dotted lines have the same gradient as the best fit line but are at the extreme of the random error in the y values of the points.

Estimates of the standard errors in m and c, and rough agreement with computation, can be found approximately by dividing the by-hand estimates by n^(1/2), where n is the number of data points (dividing by (n − 2)^(1/2) is probably better, but the worst fit lines are generated by eye so let’s not worry).


6.3.3 Finding gradient, intercept and their errors by computation

This section gives the mathematics for determining gradients, intercepts and their errors using a linear regression technique known as least squares fitting of a straight line. It may be useful to think of the best fit line as the “true value” with points distributed about it.

Given n pairs of experimental measurements (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ), which have fixed errors in the y-values but no, or insignificant, errors in the x values*, the gradient (m) and intercept on the y axis (c) of the best straight line (y = mx + c) through these points can be found by minimising the squares of the distances of the points from the line in the y direction. The minimum is found by differentiation and this leads to the analytical expressions that follow.

With the summations running from i = 1 to i = n and defining (following Squires):

dᵢ = yᵢ − mxᵢ − c    the “residual” for the ith data point (the deviation in y of each data point from the best fit line)

x̄ = (1/n) Σ xᵢ        ȳ = (1/n) Σ yᵢ

D = Σ xᵢ² − (1/n)(Σ xᵢ)²        E = Σ xᵢyᵢ − (1/n)(Σ xᵢ)(Σ yᵢ)        F = Σ yᵢ² − (1/n)(Σ yᵢ)²

Then:

m = E / D        (Δm)² = Σ dᵢ² / ((n − 2) D) = (DF − E²) / ((n − 2) D²)

c = ȳ − m x̄        (Δc)² = (1/n + x̄²/D) Σ dᵢ² / (n − 2) = (1/n + x̄²/D) (DF − E²) / ((n − 2) D)

Mathematical software might have this programmed in, but many packages, EXCEL for example, give the “product-moment correlation coefficient”, R (actually R² is usually given), which is a measure of the quality of fit (with R = ±1 or R² = 1.0 representing a perfect fit/correlation). This is insufficient, as error values are required.

R² = E² / (DF)
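A minimal Python sketch of the expressions above is given below; the data points are invented for illustration (numpy.polyfit or scipy would give the same numbers and either may be used instead):

import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]      # invented data, roughly y = 2x

n = len(xs)
sx, sy = sum(xs), sum(ys)
D = sum(x * x for x in xs) - sx * sx / n
E = sum(x * y for x, y in zip(xs, ys)) - sx * sy / n
F = sum(y * y for y in ys) - sy * sy / n

m = E / D                                   # gradient
c = sy / n - m * sx / n                     # intercept
dm = math.sqrt((D * F - E * E) / ((n - 2) * D * D))
dc = math.sqrt((1 / n + (sx / n)**2 / D) * (D * F - E * E) / ((n - 2) * D))

print(f"m = {m:.3f} +/- {dm:.3f}, c = {c:.3f} +/- {dc:.3f}")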

With the constraint that the straight line is required to pass through the origin (0,0), i.e. c = 0, the best value for m is

m = Σ xᵢyᵢ / Σ xᵢ²

with error

(Δm)² = Σ (yᵢ − mxᵢ)² / ((n − 1) Σ xᵢ²) = (Σ yᵢ² − 2m Σ xᵢyᵢ + m² Σ xᵢ²) / ((n − 1) Σ xᵢ²)

However it isn’t at all clear when this may be used. It certainly should not be used on the basis that an equation indicates that a straight line graph is expected to go through the origin. A systematic error in the experiment might shift data such that the gradient is unaltered but the line does not pass through the origin. Then the consequences of forcing the line through the origin are to lose information on the presence of systematic errors and at the same time to introduce a systematic error into the gradient.

Advice: There may be times when forcing a line to go through the origin is useful; so try both approaches and then consider what the values tell you, i.e. be careful.


* This draws attention to an important point concerning statistical analysis. Insignificant errors in the independent variable are often the case experimentally (where the value of x is set and the value of y measured), but this is also a necessary condition for the commonly used statistical treatment of errors in gradient and intercept (software that calculates errors in gradient and intercept almost certainly makes this assumption). Treatments are much more involved if the errors in both y and x are significant or if the error in individual points varies.

6.4 Error bars (and outliers)

When plotting graphs it can sometimes be useful to include “error bars”. An error bar is a way of drawing (an estimate of) the (random) error in the measured value of each data point on the graph. It is illustrated in figure 9 for the case where only the errors in y are significant and it is implied that the errors on x are insignificant. If the x error is significant a horizontal bar should be included.


Figure 9. Example, use of error bars. The line is a best fit that excludes the outlier (the point significantly below the best fit line and therefore ignored from the analysis).

Error bars are generally only included where there is a clear benefit compared to their absence: not only do they take time to insert but they also complicate graphs (especially a problem in lab diaries where best fit and worst fit lines, if drawn by hand, are also present).

Before discussing the cases where there are “clear benefits of error bars” it is worth dwelling on what they represent. It is possible to use error bars to represent random errors, systematic errors, or a combination of the two. Although the convention is that they represent random errors, students will be expected to explain the origin and meaning of the error bars whenever they use them.

6.4.1 When to use error bars

Testing understanding of the measurement

Suppose that the error bars in figure 9 were the estimated random error for a single measurement. The fact that the scatter in the data points about the best fit line is of the same size as the error bars supports the view that the experimental errors are well understood. Error bars significantly larger than the scatter would be of concern.

Significance of deviations from theoretical curves

The theoretical curve that the data is compared to here is a straight line. Here error bars make it easier to decide whether deviations from a straight line are significant or not. (In scientific jargon anything that is “insignificant” is small enough to be ignored.) This is illustrated in figures 10a and 10b, which show the same set of data but with different error bars. It will make more sense here to consider that the error bars result from repeated, rather than single, measurements at each point. In addition note that the arguments below apply whether the error bars represent the range, standard deviation or standard error.

Figure 10. (a) Data with best fit line and large error bars; (b) the same data shifted (down) with small error bars (a.u. - arbitrary units).

As with any experiment there is scatter in the data. In figure 10a the error bars all encompass the straight line and therefore the deviations from the best fit line cannot be considered significant. By contrast in figure 10b with smaller error bars the deviations must be considered significant and the implication is that either: (i) the theoretical model is incorrect or (ii) that there are additional unknown or unconsidered experimental factors causing a deviation.

The above discussion illustrates both the importance of careful consideration of errors and also that extra information is revealed as errors are reduced.

Final note: here the deviation of a number of data points was considered. The significant deviation of a single data point is treated a little differently (see also outliers below).

Significant errors in both y and x, and variation in the size of error bars

Since the commonly used analytical method of determining the line of best fit and the errors in m and c is based on the errors in each point being significant only in y, the cases where this does not apply need to be treated with care. A first step towards dealing with (or at least acknowledging) this is to provide x as well as y error bars when appropriate. The error analysis required when the errors are significant in both x and y is beyond the scope of this document.

Similarly the commonly used analysis assumes that the y errors are the same for each data point, and a first step towards acknowledging when this is not so might be to show these varying error bars. Situations where varying errors may occur:

Errors based on repeat measurements will vary if the number of repeats is varied.

Some experimental conditions might naturally lead to varying errors (for example, the determination of frequency from a fixed number of oscillations).

When combining measurements to obtain a “y” value.

6.4.2 Outliers


Returning to figure 9, in drawing the best fit line only 5 points were taken into consideration, whilst the 6th (the point below the line) was excluded. An excluded point is known as an “outlier” and clearly points should not be categorised as outliers lightly.

Potential outliers may sometimes occur due to a mistake in a reading or in the setting of an experimental condition, and care must be taken when dealing with them. Working on the assumption that the first indication of its presence was on plotting a graph (probably in a lab diary):

First check that all arithmetic and the plotting of the data point were performed correctly. Do not rub the point out or ignore it - apart from anything else it may in fact be correct.

Make a decision about whether to include or exclude the point from the analysis (i.e. whether it is treated as an outlier or not) and indicate this clearly.

If possible determine whether an error was made in the measurement - by going back and performing repeats (this isn’t usually possible in year 0 and 1 labs, is often possible in year 2 and is essential in year 3 and 4 projects).

The earlier an outlier is spotted the easier it is to perform repeat measurements. This is aided by drawing graphs as quickly as possible. The ultimate is to draw graphs as you go along. Computers are very useful here but very rough sketch graphs are a useful alternative.

Consideration of whether a point should be considered as an outlier takes us back to error bars. In figure 9 it is somehow reassuring that the line of best fit passes through the 5 good data points within their error range as indicated by their error bars. It appears reasonable to ignore the outlier in the determination of the best fit line because it would be impossible to include this point on the same basis (although with much larger error bars the outlier might be included). However, the scatter in the data is also sufficient to make this judgement and in reality the error bars do not add anything.

6.4.3 Dealing with a small number of data points

Clearly it is better to have many data points rather than few, but what are the implications of cases when this isn’t possible? Return to figure 9 and consider having not 6 but 3 or even 4 data points, one of which is the outlier:

The scatter in the data is not obvious from the points alone.

(Correct) error bars become more important.

It is difficult or impossible to identify outliers.

The values obtained for m and c are (almost always) less accurate and their errors larger.

6.5 Forcing lines to be straight

It is almost always possible to manipulate the mathematical form of data such that an easily analysed straight line results when it is plotted. Essentially the approach is to obtain a relationship in the form y = mx + c. A simple example and two experimentally very important examples are given in table 2 below.

Table 2. Example methods for making straight line plots

Function            Plot (y = mx + c)        Comments
y = 2x²             y vs x²                  A very simple example
W = kTⁿ             log₁₀W vs log₁₀T         log₁₀W = log₁₀(kTⁿ) = n log₁₀T + log₁₀k; used in determining unknown power relationships (finding n)
y = A e^(−E/kT)     ln y vs 1/T              Known as an “Arrhenius plot”; used when considering thermally activated processes with an activation energy (E)
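As an illustration of the W = kTⁿ entry, the short Python sketch below fits a straight line to log₁₀(W) versus log₁₀(T). The data are invented so that the expected answers are n ≈ 2 and k ≈ 3:

import math

T = [1.0, 2.0, 3.0, 4.0, 5.0]
W = [3.0, 12.1, 26.8, 48.5, 74.6]           # invented data, roughly W = 3*T^2

logT = [math.log10(t) for t in T]
logW = [math.log10(w) for w in W]

# least-squares gradient and intercept of log10(W) vs log10(T)
npts = len(T)
sx, sy = sum(logT), sum(logW)
D = sum(x * x for x in logT) - sx * sx / npts
E = sum(x * y for x, y in zip(logT, logW)) - sx * sy / npts
n_power = E / D                             # the power n
logk = sy / npts - n_power * sx / npts      # intercept = log10(k)

print(f"n ~ {n_power:.2f}, k ~ {10**logk:.2f}")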


7. Some experimental considerations

It is too large a subject to consider what constitutes a good experiment, i.e. one that can be believed. Here a flavour will be provided by first introducing some of the terminology that is used before providing two useful examples making use of what has gone before.

7.1 Terminology

The “reliability” of a measurement relates to its consistency. Otherwise known as the “repeatability” of a measurement, it is the extent to which an instrument can provide the same value for nominally the same measurement (i.e. the same subject under the same conditions).

The “validity” of the findings of an experiment refers to the extent to which the findings can be believed to be right. For a particular experiment this depends on the rigour with which the study was conducted (as assessed through the experimental design, its reliability and the care in its execution) but also the extent to which alternative explanations were considered.

7.2 Comparing results with accepted values

In the year 0, 1 and 2 teaching laboratories, it is common for measurements to be made of known values (such as g), allowing a comparison with the results obtained. A downside of this is that students may perceive that the result (being already known) is not important and that instead the point is practice of a technique and seeing physics in action. This is incorrect: whatever the result, it sheds light on the experiment.

Remember that any result is presented as: (measured value +/- error) units. This allows comparison with the known values and if the two agree within errors (i.e. within the error range of the measured value) then there is nothing more to say. However, if the two do not agree within errors there must be a reason and it is necessary to consider what this might be.

Candidates include:

Systematic errors in the measurement or equipment.

Misjudged random errors.

Poor experimental technique.

Poor or inappropriate (possibly oversimplified) theory.

If the reason for the discrepancy is properly understood and subsequently included then agreement should be possible. Whilst such an extra analysis is likely to be beyond the expectations for year 0 and year 1 labs, it is important that students think about the situation, and it is often true that the reason for the discrepancy is known in principle.

A link can also be made to more advanced work where it is essential that accurate measurements of unknown values are made. If measurements of known values (possibly standard samples or “standards”) are made first then any systematic errors can be corrected for. The known samples provide a way of calibrating the instrument.

7.3 y = mx relationships

Previous discussion of straight line graphs has been concerned with the general case (y = mx + c relationships). However, many expected relationships are of the form y = mx; in other words the graph produced is expected to go through the origin. This is worth special consideration as it often causes confusion for inexperienced experimentalists.

The main issue is that students not only include the origin as a data point but also give it special significance by forcing the best fit line to go through it (whether by hand or on a computer).


One of the classic systematic errors is a zero offset, the effect of which is to produce a constant shift of all data points either up or down whilst leaving the gradient (from which most information is found) unaffected. Excluding the origin from the analysis allows the y intercept to be compared to zero and so the significance of a possible zero offset to be considered. The alternative, such as forcing the best fit line through the origin, both removes evidence for a possible zero offset and, if there is one, alters the gradient, so introducing an (illegitimate) error into the gradient.

8. Some important distributions

A number of distributions are observed in experiments; three important ones described here are the Gaussian (or Normal), the Poisson and the Lorentzian. The former two distributions can be related to the Binomial distribution and so this is introduced first.

In all cases the probability function P is given using x, μ and σ as the measured value, the mean and the standard deviation of the distribution respectively. The functions are normalised such that ∫P(x) dx = 1.

8.1 Binomial statistics

Binomial statistics describe certain situations where the results of physical measurements can have one of a number of well-defined values - such as when tossing coins or throwing dice. Consider a situation where the result of one physical measurement of a system has a probability p of giving a particular result. If an experiment is carried out on n such systems, then the probability that x of the systems will produce the required result is given by:

P(x, n, p) = [n! / (x! (n − x)!)] pˣ (1 − p)ⁿ⁻ˣ

An example: The probability of throwing a six with one die is 1/6. If we throw 4 dice we may obtain 0, 1, 2, 3 or 4 sixes. The probability of obtaining zero sixes is given by substituting in the equation above, so that

probability of zero sixes with 4 dice:  P(0, 4, 1/6) = [4! / (0! 4!)] (1/6)⁰ (5/6)⁴ = 0.482

Similarly the probability of throwing one six is

P(1, 4, 1/6) = [4! / (1! 3!)] (1/6)¹ (5/6)³ = 0.386,  and so on.

For this distribution the mean value is np and the standard deviation is √(np(1 − p)).
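The dice probabilities above are easily reproduced with a few lines of Python (a sketch, using the binomial formula directly):

from math import comb

def binom(x, n, p):
    # probability of exactly x successes in n trials, each of probability p
    return comb(n, x) * p**x * (1 - p)**(n - x)

p = 1 / 6
print(binom(0, 4, p))                         # ~0.482, no sixes with 4 dice
print(binom(1, 4, p))                         # ~0.386, exactly one six
print(sum(binom(x, 4, p) for x in range(5)))  # the probabilities sum to 1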

8.2. The normal (or Gaussian) distribution

As already mentioned the distribution function which best describes random errors in experiments is the “normal” or “Gaussian” distribution. This distribution is an approximation to the binomial distribution for the special limiting case where the number of possible different observations is infinite and each has a finite probability so that np>>1.

The normalised probability function P(x) is given by:

P(x) = [1 / (σ√(2π))] exp[ −(x − x̄ₙ)² / (2σ²) ]


where, as before, x is the measured value, x̄ₙ is the mean of the sample and σ is the standard deviation, and the function is normalised such that ∫P(x) dx = 1. As the example in figure 11 shows, the function is (characteristically) bell shaped and symmetrical.

Figure 11. Gaussian probability function generated using x̄ₙ = 0 and σ = 1, resulting in the x-axis being in units of standard deviation. The FWHM for the distribution is also shown and can be seen to be wider than 2σ.

If x̄ and σ are known the whole distribution function can be drawn and the probability of measurements occurring in a given range can be determined. The integral of the Gaussian function cannot be performed analytically and so many statistics books contain look-up tables, a summary version of which is presented in table 3.

Table 3. The Integral Gaussian or Normal probability

Range either side of mean (±mσ)     Expected percentage of values in range
m = 0                               0%
m = 1                               68.3%
m = 2                               95.4%
m = 3                               99.73%
m = 4                               99.994%

From this table it can be seen that quoting an error of ±σ would cover a range in which ~68% of the values fall, which will therefore give a similar estimate of the error as the “probable error” in which 50% of the values fall. The FWHM is also worth considering in this context, as experimentally it is often more direct and convenient to deal with than the standard deviation. It is clear from figure 11 that the FWHM covers a little more than the range of ±σ (in fact FWHM = 2√(2 ln 2) σ ≈ 2.355σ). This corresponds to a range in which ~76% of the values fall. Any of these three might be used as an estimate of the error in the case where a small number of measurements have been performed.

8.3 Poisson distribution

The Poisson distribution is the limiting case of a binomial distribution when the possible number of events (n) tends to infinity and the probability of any one event (p) tends to zero


in such a way that np is a constant. The normalised distribution is given by:

P(x) = μˣ e^(−μ) / x!

where P(x) is the probability of obtaining a value x when the mean value is μ. The standard deviation for a Poisson distribution is √μ. This distribution is unlike the normal or Gaussian distribution in that it becomes highly asymmetrical as the mean value approaches zero.

Poisson distributions are often appropriate for counting experiments where the data represent the number of events observed per unit time interval. A gram of radioactive material may contain ~10²² nuclei whereas the number that disintegrate in each time interval is many orders of magnitude smaller.

This covers a very wide range of physics experiments:

In the teaching labs - radioactive decay, x-ray absorption and fluorescence.

More widely - spectroscopy, particle physics (such as at the LHC), astronomy.

Counting experiments: the “signal to noise” ratio

In all counting experiments the “quality” of the data is expected to “improve” with increasing counting time and counts. This can be understood as follows: the mean number of counts in the experiment, μ, is the “signal”, whilst statistical variations in this signal are represented by the standard deviation σ and can be thought of as “noise”.

In Poisson statistics σ = √μ, therefore signal/noise = μ/√μ = √μ, i.e. the ratio increases with the square root of the number of counts. This is an often quoted and very important finding for understanding and designing experiments. Put another way, if in a particular counting period an average of N counts is obtained, the associated standard deviation is √N (ignoring any errors introduced by timing uncertainties, etc). Clearly, the larger N the more precise the final result. For a given source and geometrical arrangement, however, N can be increased only by counting for longer periods of time.

8.4 Lorentzian distribution

This distribution (shown in figure 12) is important as it describes data corresponding to resonance behaviour. This includes mechanical and electrical systems but also the shape of spectral lines occurring in atomic and nuclear spectroscopy. Care must be taken as it describes intensity rather than amplitude distributions; however amplitudes are easily converted to intensities (as intensity is proportional to amplitude squared).


Figure 12. Lorentzian probability function generated using x̄ₙ = 0 and Γ = 2.355.

The Lorentzian distribution is symmetric about the mean, is usually characterised by its full width at half maximum (aka “half width”) rather than by its standard deviation, and is given (normalised) by:

P(x, μ, Γ) = (1/π) (Γ/2) / [ (x − μ)² + (Γ/2)² ]

where P(x,μ,Γ) is the probability of obtaining a value x, when the mean value is μ.

The FWHM used in figure 12 was chosen to match that of the Gaussian in figure 11. Comparison of the two reveals that a characteristic of this distribution is that it has “heavy tails”, i.e. it falls away slowly for large deviations. A consequence of this is that it is not possible to define a standard deviation for this function.

It should be noted that a number of broadening mechanisms may be effective in spectroscopic experiments and some of these, such as Doppler broadening and also the resolution of the system, may be Gaussian in nature. What is measured may therefore be a convolution of a Lorentzian and a Gaussian function, resulting in a so-called “Voigt” profile. Experimentally, it is usual to start by assuming a Gaussian line shape; deviations away from this in the tails are often good evidence of a Lorentzian contribution.
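The “heavy tails” are easy to see numerically. The Python sketch below compares the two functions at a few distances from the peak, with both curves given the same FWHM (the distances and widths are chosen purely for illustration):

import math

sigma = 1.0
fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma   # ~2.355 for sigma = 1
gamma = fwhm                                    # Lorentzian FWHM chosen to match

def gauss(x):
    return math.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def lorentz(x):
    return (gamma / 2) / (math.pi * (x**2 + (gamma / 2)**2))

for x in (0.0, 2.0, 5.0, 10.0):
    print(f"x = {x:4}: Gaussian = {gauss(x):.2e}, Lorentzian = {lorentz(x):.2e}")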


III.3 USING MICROSOFT EXCEL and WORD

You will be required to use basic graph plotting and word processing packages throughout your undergraduate studies and on into the rest of your life, so you should gain familiarity with using them as quickly as possible. All Cardiff University networked computers carry the Microsoft suite, although you can certainly use other packages if you would rather. However we will teach you (and can provide guidance with) Word and Excel software. To further assist you in this, we have provided in the Learning Central – General Support module a number of screencasts that you may find useful: The Basics of Scientific Reports; More on Scientific Reports; Using Excel - graphs; Using Word – Equation Editor; Using Word – formatting.

USING EXCEL

1. Determining errors from straight line graphs using EXCEL

Instructions

Input the data to be analysed into an EXCEL spreadsheet in column form.

Select an array of cells 2 columns wide by 3 rows deep anywhere in the spreadsheet (these are the ones highlighted in the figure below).

In the function/command line type “=linest( ” - presumably “linest” stands for line statistics.

Opening the bracket leads EXCEL to prompt for:
known_y’s – simply select using the mouse, then insert a comma.
known_x’s – simply select using the mouse, then insert a comma.
const – input 1 (using 0 would force the line through the origin) and insert a comma.
stats – input 1 (this sets the correct statistics) and close the bracket.

The command line should look something like: =LINEST(A5:A14,B5:B14,1,1)

To execute the calculation press CTRL, SHIFT and ENTER together.

Values for m and c and their errors should appear in the selected 2x3 array in the format shown in the figure below. The “m”, “c”, “errors”, “R^2” and “reg error” labels have been added for clarity.

In this case the gradient is m = 2.60 ± 0.04 and the intercept is c = -1.2 ± 1.6, i.e. the straight line passes through the origin within the (standard) error.

R^2 is the same value as appears on graphs when adding trend lines: it is the square of the correlation coefficient and indicates how well the data are represented by a straight line.

“Reg Error” is short for “regression error”; it is the standard error of the measured y values compared to the best fit y values. It is analogous to the standard error for repeated measurements of the same value where values are then compared to the mean of the values.


Least squares fitting of straight line data

The data:

x     x^2     y
0     0       0
1     1       2
2     4       11
3     9       21
4     16      42
5     25      63
6     36      93
7     49      120
8     64      162
9     81      216

LINEST output (the selected 2x3 array of cells):

                      m           c
values                2.60301     -1.18577
errors                0.042074    1.647517
R^2 / reg error       0.997914    3.572721

Figure: Appearance of EXCEL spreadsheet when determining errors in a straight line graph. The selected 2x3 array of cells (in which values were eventually returned) are highlighted.
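For reference, the same numbers can be reproduced outside EXCEL using the standard least-squares formulae. The sketch below is only an illustration, assuming Python with NumPy; the two arrays are the x² and y columns of the spreadsheet above (the gradient and intercept shown in the figure are reproduced when the y column is fitted against the x² column). It returns the same gradient, intercept, standard errors, R² and regression error as LINEST.

import numpy as np

u = np.array([0, 1, 4, 9, 16, 25, 36, 49, 64, 81], dtype=float)       # x^2 column
y = np.array([0, 2, 11, 21, 42, 63, 93, 120, 162, 216], dtype=float)  # y column

n = len(u)
Suu = np.sum((u - u.mean()) ** 2)                     # sum of squared deviations in x
m = np.sum((u - u.mean()) * (y - y.mean())) / Suu     # gradient
c = y.mean() - m * u.mean()                           # intercept

residuals = y - (m * u + c)
s = np.sqrt(np.sum(residuals ** 2) / (n - 2))         # "reg error" (standard error of the fit)
dm = s / np.sqrt(Suu)                                 # standard error on the gradient
dc = s * np.sqrt(np.sum(u ** 2) / (n * Suu))          # standard error on the intercept
r2 = 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)   # R^2

print(f"m = {m:.5f} +/- {dm:.5f}")
print(f"c = {c:.5f} +/- {dc:.5f}")
print(f"R^2 = {r2:.6f},  reg error = {s:.6f}")

If SciPy is available, the scipy.stats.linregress function performs an equivalent calculation.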

2. Making graphs in EXCEL 2007

EXCEL 2007 is substantially different from previous versions and this has caused students (and staff) some problems: there are more options so things are generally a bit more difficult to find.

To help, some guidance on basic graphing tasks is given below.

To make a basic graph

Select two or more columns of data, either by clicking and dragging or by selecting a column, holding down CTRL and selecting additional columns. The left hand column will be the data for the x-axis no matter what order the data is selected.

Select “insert” on the toolbar

Select type of graph (usually “scatter”).

To add titles*

With graph selected, in “chart tools” click on “Layout”.

Here click on “axis title”. For the y axis (primary vertical axis title) it is probably best to use “rotated title”.

You may also want to add a “chart title” (for your diary but not for inclusion in reports!).

*You don’t seem to be able to add equations to titles but you can use Word-like formatting: “CTRL =” for subscripts, “CTRL +” for superscripts.

To change the range of data shown

Either right-click on the axis and choose “format axis”.


Or, under “Layout” choose “Axes”, then the axis of interest, then (at the bottom of the list) “More… axis options”.

Under “axis options” change minimum and/or maximum to fixed (from auto) and select desired value(s).

Formatting data series (line and marker)

Right click on the required data series on the graph and then choose “format data series” and choose from the “series options”.

For example to change marker size choose “marker options” set marker type to “built in” then set “size”.

Alternatively, with the graph selected: under “layout” the required data series can be selected by use of the drop down box in “current selection” (on the left of the toolbar).
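As noted at the start of this section, other packages may be used instead of EXCEL. Purely as an optional illustration (a minimal sketch assuming Python with the matplotlib library; the numbers are made up simply to show the commands), the same basic tasks - a scatter plot, axis titles, a fixed axis range and a chosen marker size - look like this:

import matplotlib.pyplot as plt

height = [0.035, 0.040, 0.050, 0.060, 0.070]    # illustrative x data
t_squared = [3.9, 4.6, 5.7, 6.8, 7.9]           # illustrative y data

plt.scatter(height, t_squared, s=20)            # scatter plot; s sets the marker size
plt.xlabel("Height /m")                         # axis titles
plt.ylabel("Time squared /s$^2$")
plt.xlim(0.03, 0.08)                            # fix the range of data shown on the x axis
plt.show()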


III.4 REPORTING ON EXPERIMENTAL WORK

AN EXAMPLE OF HOW TO WRITE A LONG REPORT

1. Introduction

Scientific report writing is a skill: numerous rigid conventions are applied, in combination with a surprising degree of freedom in structure, to achieve clarity of presentation.

Physics students will write such reports at a rate of approximately one per semester throughout their undergraduate University career. For many students the feedback this provides may be insufficient for them to efficiently get to grips with what is required and expected. This document is therefore based around a specimen report, the examination of which is intended to help students in writing long reports.

“Galileo’s Rolling Ball Experiment” is a Preliminary (Year 0) experiment and also a classic experiment of physics. It is performed in a three hour laboratory session in which students are required to both take and analyse their data (diaries are handed in at the end of the session). It is a simple experiment used to help develop data handling and error analysis for people some of whom are new to performing physics experiments for themselves. Consequently the report is rather basic.

Following this introduction, the main body of the report is split into three sections:

2. Teaching Laboratory instructions for the experiment

3. The specimen report based on students’ laboratory diaries

4. A final section on report writing that discusses some of the finer points and the School’s changing expectations of students as they progress through their Physics courses.

2. Teaching Laboratory instructions for the experiment

G2 GALILEO'S ROLLING BALL EXPERIMENT

Reference: Duncan, Chapter 7, Statics and Dynamics; Chapter 8, Circular motion and gravitation

Equipment List: Metal channel, retort stand, ball bearings and box, stopwatch, metre rule.

Introduction
Galileo Galilei made observations in astronomy and mechanics that were of major importance to the development of 17th century science. Perhaps Galileo's most famous experiment, which was supposed to involve the leaning tower of Pisa, was his verification that all bodies, independent of their mass, fall at the same rate (if the bodies are heavy enough that air resistance is negligible). We shall look here at one of Galileo's less famous but closely related experiments, which conveniently does not require dropping weights from the tower of Pisa!


Galileo performed an experiment on a falling body that 'diluted' the effects of gravity, by letting the body roll down a slope. Galileo predicted and was able to show experimentally that in this case: 1) No matter what the angle θ (this is the Greek letter theta) of the slope, the speed of the object at the bottom of the slope depends only on the total height h it has fallen through. 2) The speed of the object increases in proportion to the time it has travelled. 3) For a given angle of the slope, the vertical height h fallen is proportional to the square of the time it has travelled. Since this was true for all the slopes that Galileo was able to measure, by imagining the steepness of the slope to be increased until it was vertical he predicted that these rules would be true for a freely falling body. Imagine yourself in Galileo's position. Mechanical watches had not yet been invented. He had to use 'water clocks' in which time was measured by water escaping from the bottom of a conical container. Standards of length differed across Europe. Also, he calculated, not with decimal fractions, but with whole number ratios. (See the article by S Drake in the American Journal of Physics, p302, volume 54, April 1986, if you are interested in the historical details). Your experiment here will be rather easier than Galileo's!

[Diagram: the ball is released from rest at the Start (t = 0), a vertical height h above the Finish at the bottom of the slope.]

In this experiment we shall be concerned with investigating the third statement only. Referring to the above diagram, Galileo's third statement can be expressed mathematically as

h ∝ t² (if θ is fixed) (Eq. 1)

Here t is the time for the object to roll from the start to the finish, and the symbol ∝ means "is proportional to". (The constant of proportionality depends on the strength of the Earth's gravity and the angle of the slope). The aim of this experiment is therefore to check the above relation. The experiment provides a good introduction to taking measurements, presenting information in tabular and graphical form, and the consideration of errors of measurement. Additionally, you will need to relate your experimental data to theory presented in a mathematical form.


Experiment (read this to the end before you start)

You are provided with a channel which can be inclined at any angle. You should use the following procedure, making sure you record all the details in your laboratory notebook.

STEP 1 - First fix the value of θ at a value between 2 and 15 degrees. (If θ is too large then it is difficult to time the fast-moving ball, whilst if it is too small the effects of friction will be more important). Measure sin θ for the slope and estimate its error (see below). Since all your measurements will be made at the same angle it is very important to perform this carefully. In subsequent calculations you will use sin θ and its error, but you should also find θ (and its error) at this point.

STEP 2 - Hold the ball at a convenient position along the channel and measure h.

STEP 3 - Measure the time t that it takes the ball to roll down the slope for a starting height h. Repeat the measurement 3 times and record each result.

STEP 4 - Repeat steps 2 and 3 for eight different values of the starting height h. Make sure that you neatly tabulate every measurement that you make (not just the averages). Your table should have the following columns:

Height h /m | t1 /sec | t2 /sec | t3 /sec | t1² /sec² | t2² /sec² | t3² /sec² | t² (average) /sec²
...         | ...     | ...     | ...     | ...       | ...       | ...       | ...
...         | ...     | ...     | ...     | ...       | ...       | ...       | ...
...         | ...     | ...     | ...     | ...       | ...       | ...       | ...

Always include the units when you write down any numerical value.

Some suggestions
It is difficult to accurately measure the angle θ with a protractor! The best way to find it is to measure H (the change in height of the end of the channel above the bench) and D (the total length of the channel) shown in the diagram below. (Do not confuse the symbol h with H or d with D, also shown on the diagram!) Then sin θ = H/D, so you can calculate θ. Remember to tabulate all the measurements you make, not just θ.

[Diagram: side view of the channel. D is the total length of the channel and H the height of its raised end above the bench; the ball is released a distance d along the channel, at a height h.]


Precision Estimates

In all measurements you make, you should write down the precision of the measurement - i.e. could you measure h, H and D to the nearest millimetre, centimetre, or metre? (This depends on how you measure the quantity as well as the fineness of divisions on the metre rule. For example, can you tell exactly where the centre of the ball bearing is, and can you position the ruler easily?) The golden rule is: use common sense when estimating the precision of a measurement.

Analysis

Equation 1 can be written in another, exactly equivalent, form:

t² = K × h (if θ is fixed) (Eq. 2)

Because t² is proportional to h, a graph of t² (plotted on the vertical axis) against h (plotted on the horizontal axis) should give a straight line, which passes through the origin, with a gradient equal to the constant K.

STEP 1 - From your data in the tables of t² and h, plot a graph for your value of θ.

STEP 2 - Draw a straight line which best fits the data points. Work out the gradient of this line (don't forget the units). Draw the 'error lines' and so work out the error in the gradient. Does your best fit line pass through the origin?

The data you took can be used to work out the acceleration due to gravity, g. This can be done since the constant K in equations 1 and 2 is, according to theory (see appendix), related to g and sin θ by the formula:

K = 2 / (g sin² θ) (Eq. 3)

So, to find g, just do the following: work out sin θ (it's just equal to H/D) and K (the gradient of the corresponding graph you plotted) and substitute into equation 3, after rearranging it to make g the subject of the equation. Be careful to make sure you know what units K is measured in. What value of g do you get? Even taking errors into account* the value is probably around half the accepted value of 9.8 m s⁻². Can you think of any reason why this should be so?

(* If you need to, ask a demonstrator to explain how to calculate the errors in g - you will need to estimate the experimental error in each of the things that was used to find g, i.e. the individual errors in sin θ and K, and then combine the errors. Actually, you will probably find there is comparatively little error in sin θ so that most of the error is in finding K.)

Appendix

Read this at home, not in the laboratory class. You may find it useful in conjunction with your Mechanics lectures. Suppose a body slides, without friction, down a slope of inclination θ:


[Diagram: a body of mass m on the slope, a vertical height h above the Finish. Its weight mg acts vertically downwards and the component mg sin θ acts parallel to the slope.]

The component of the force on the mass m parallel to the slope is mg sin θ , so the acceleration of the body parallel to the slope is

a = F/m = g sin θ

Using the formula "s = ut + at²/2" means that the distance moved to the bottom of the slope in a time t is just (u = 0 if the body starts at rest)

d = g sin θ × t²/2

But sin θ = h/d or d = h/sin θ, so we finally get

h = g sin² θ × t²/2 (Eq. 4)

This equation is therefore the same as equation 2, since we can re-arrange it as

t² = 2h/(g sin² θ) (Eq. 5)

So, comparing directly to equation 3, we have K = 2/(g sin² θ), as stated earlier.
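As a quick numerical check of this relationship (a minimal sketch, not part of the laboratory script, assuming Python; the numbers K ≈ 113 s².m⁻¹ and θ ≈ 3.52 degrees are those quoted in the specimen report that follows), rearranging equation 3 for g gives a value of roughly 4.7 m s⁻², i.e. about half the accepted value, as discussed in the report.

import math

K = 113.0                          # gradient of t^2 versus h, in s^2 per metre (specimen report)
theta = math.radians(3.52)         # slope angle from the specimen report, in radians

g = 2.0 / (K * math.sin(theta) ** 2)   # rearranged form of K = 2/(g sin^2 theta)
print(f"g = {g:.2f} m/s^2")            # roughly 4.7 m/s^2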

3. The specimen report based on students’ laboratory diaries

(A report based on measurements made by a Foundation Engineering Student taking PX0102 in October 2006 is on the following page)


Galileo’s Rolling Ball Experiment

Date: January 2007

Author: Cardiff University, School of Physics and Astronomy

Abstract

Galileo’s rolling ball experiment was performed in which the motion of a ball bearing down a shallow incline, of angle θ = 3.52 +/- 0.03 degrees, was timed as a function of the starting height of the ball. Starting heights between 0.035 and 0.070 m resulted in travel times in the range 1.90 – 2.90 s. As expected, a graph of the square of the time of travel versus starting height was a straight line that passed through the origin. The gradient would be expected to be 2/(g sin²θ), where g is the acceleration due to gravity, assuming that the gravitational potential energy was entirely converted to translational kinetic energy. The value of the gradient was found to be 113 +/- 19 s2.m-1, from which a value for g of 4.76 +/- 0.12 m.s-2 was determined; this is approximately a factor of two lower than the accepted value of 9.81 m.s-2. The discrepancy can be attributed to the fact that as the ball rolls down the incline gravitational potential energy is converted not only into translational but also into rotational kinetic energy.

1. Introduction

Galileo Galilei was a seventeenth century Italian scientist who made many important observations in astronomy and mechanics [1]. His most famous experiment on the effects of gravity involved dropping weights from the tower of Pisa and showed that all bodies fall at the same rate independent of their mass. In the rolling ball experiment [2] in which a ball rolls down an incline, the effects of gravity are easier to quantify since the travel times are increased.

Using this experiment Galileo showed that: (i) the speed of the object at the bottom of the slope depends only on the height it has fallen through, (ii) that the speed of the object increases in proportion to the time it has traveled and (iii) for a given angle of slope, the vertical height fallen through is proportional to the square of the time it has travelled.

The experiment performed here was concerned only with the last statement.

2. Background Theory

A schematic of the experiment in which an object of mass m acted upon by gravity (acceleration due to gravity is g) on an incline is illustrated in figure 1 below.


Figure 1. Schematic of an object on an inclined plane. The plane is at an angle θ to the horizontal and the force due to gravity acting down the slope is m.g.sinθ.

For an incline at an angle θ, although the force vertically downwards is m.g, the force parallel to the slope is m.g.sinθ. This is the force that accelerates the body down the slope, the acceleration a being given by:

a = force/mass = m.g.sinθ/m = g.sinθ (1)

If the body starts at rest (initial velocity zero) and travels a distance d (for example to the bottom of the slope) the relationship between the time taken and distance travelled is given by the well known equation of motion:

d = (1/2).a.t²  or  d = (1/2).g.sinθ.t² (2)

In addition, if h is the change in height the object undergoes by travelling a distance d down the slope then it is clear from figure 1 that:

sinθ = h/d (3)

Note that as h is defined in figure 1 the object would start at the top of the slope. Substituting for d (from equation 3) into equation 2 and rearranging gives:

t² = 2h/(g.sin²θ) (4)

This is the experiment that has been performed. Whilst Galileo performed the experiment for a range of slope angles, here only one has been used.

3. Description of the Experiment



The “slope” was provided by a right angled channel, held by a retort stand, down which a ball bearing could roll. After fixing the slope its angle was found (by way of measuring its elevation and length) to be 3.52 +/- 0.03 degrees. The ball bearing was placed on the slope at a particular height and its time to travel down the slope was measured by hand with a stopwatch. The measurement was performed three times for each height and at eight different heights. One person released the ball at the set height and a second person timed the descent. The timing error for a single measurement was initially estimated to be +/- 0.5 s; however, the spread of times found in the repeated measurements was usually only +/- 0.1 s. The error in the release height of the ball bearing was +/- 1 mm. The range of heights used was 0.035 to 0.070 m, resulting in travel times in the range ~1.9 to 2.9 s.

4. Results

A graph of the average squared travel time against release height is shown in figure 2. The data lie on a reasonable straight line with some scatter about the best fit line. By drawing best and worst possible fits by hand the gradient of the line was found to be 113 +/- 19 s2.m-1. These lines indicated that, within errors, the data form a straight line through the origin [3], as expected from equation 4, indicating that any systematic errors are small compared to random errors.

Figure 2. Graph of the average of the travel times squared versus the release height. The straight line here is a computer generated best fit to the data [3].

From the gradient and the angle of slope a value for the acceleration due to gravity, g, was determined (using equation 4) to be 4.76 +/- 0.12 m.s-2.

5. Discussion

Although the results of the experiment do show that for the single angle of slope used, the vertical height fallen through is proportional to the square of the time it has traveled, the derived value for g does not agree with the accepted value of 9.81 m.s-2 within graphical errors. The obtained value of g is approximately half of the expected value, whereas the error is only ~10%. The discrepancy is therefore much larger than can apparently be explained by random errors associated with the measurement and therefore needs to be considered further.

[Figure 2 appears here: time squared /s2 (vertical axis, 3 to 9) plotted against height /m (horizontal axis, 0.03 to 0.08).]


The sources of measurement error include distances (for the height of release and the angle of the slope) and timing (for the travel time). Neither the metre rule nor the stopwatch is likely to have appreciable intrinsic errors associated with it. The use of the rule to determine heights and angles has relatively small errors as discussed above and no errors have been found in calculations. The estimated absolute timing error (+/- 0.5 s) arose from consideration of matching the start of the stopwatch with the release of the ball bearing and its stop with the ball reaching the bottom of the slope. The fact that this error appears significantly larger than the spread of travel times (0.1 s) obtained from repeated measurements indicates that there may be a systematic error in starting and stopping the watch. However, a systematic error of up to +/- 0.5 s would do little to improve the agreement between the measured acceleration and g.

The explanation for the results obtained lies in the realization that, although it is the translational acceleration down the slope that is measured in this experiment, the gravitational potential energy released is not converted solely into this form of motion. As the title of the experiment states, the ball rolls down the hill, implying that it has both translational and rotational motion. In other words the gravitational potential energy of the ball is converted into both translational and rotational kinetic energy. It should be possible to reanalyze the results here incorporating the effects of rotational motion but this is beyond the scope of this report.
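A rough estimate of the size of this effect can be made (the following is a minimal numerical sketch in Python, not part of the specimen report, assuming a uniform solid sphere with moment of inertia I = (2/5)mr² rolling without slipping; for a ball rolling on the two edges of the right-angled channel used here the effective rolling radius is reduced to r/√2, which increases the rotational share of the energy):

g = 9.81                  # accepted value of the acceleration due to gravity, m/s^2
I_factor = 2.0 / 5.0      # I = (2/5) m r^2 for a uniform solid sphere

# Apparent "g" inferred if rolling is ignored: g / (1 + I / (m * r_contact^2))
g_flat = g / (1 + I_factor)           # rolling on a flat surface, r_contact = r       -> (5/7) g
g_channel = g / (1 + I_factor * 2)    # rolling in a 90 degree channel, r_contact = r/sqrt(2) -> (5/9) g

print(f"apparent g, flat surface        : {g_flat:.2f} m/s^2")
print(f"apparent g, right-angled channel: {g_channel:.2f} m/s^2")

The channel value (about 5.5 m.s-2) is noticeably closer to the measured 4.76 m.s-2 than to the accepted 9.81 m.s-2, which supports, at least qualitatively, the explanation given above.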

6. Conclusions

Galileo’s rolling ball experiment has been performed in which the motion of a ball bearing down a shallow incline of angle 3.52 +/- 0.03 degrees was timed as a function of release height. Assuming that gravitational potential energy is entirely converted to translational energy of the ball, the value for g was determined to be g = 4.76 +/- 0.12 m.s-2. This value is approximately a factor of two lower than the expected value. The discrepancy is almost certainly caused mainly by the fact that gravitational potential energy is converted into rotational as well as translational kinetic energy as the ball rolls, rather than slides, down the hill.

References

[1]. “Galileo’s physical measurements” Stillman Drake, Am.J.Phys 54 (1986) 302-306.

[2]. Experiment G2 (Galileo’s Rolling Ball Experiment) in Preliminary/Foundation Year Laboratory Course Booklet (2006_7).

[3]. The computer generated best fit gave a gradient of 127 s2.m-1.

Aside: This value is at the high end of the values quoted in the text. Looking more closely it appears almost certain that the student forced the best fit line to go through the origin. This is a commonly made mistake. In doing so the student has assumed not merely that t² = 0, h = 0 is an experimental point but that it is a point known with absolute certainty. While this may at first seem reasonable (after all, a change in height of zero will take zero time), the trouble is that it hides the effects of any systematic errors from the data analysis. For example, it is quite feasible that a systematic error could have been made in measuring the release height or in timing the motion. This might result in a straight line that does not go through the origin but that still has a perfectly valid gradient. The result is that the student has both hidden any systematic errors and introduced an error into the gradient and consequently into the calculated value of g. If the student had spotted the error it would not be valid to present the erroneous results; the data would need to be reanalyzed. However, giving the benefit of (a very small) doubt, this report has been written using the assumption that the student did not force the best fit line through the origin and should be read with this in mind.

4. Report writing

The style is intended to be very similar to that of a paper presented to a scientific journal, but the level at which it is written should be such that another student with a similar background but unfamiliar with the experiment would be able to understand what you have done, why, and what it all means. Reports are separated into sections, the expected contents of which are described below. This is followed by some general advice and comments on changing expectations through the undergraduate course.

4.1 Contents of the different sections of a scientific report

Abstract

This summarizes the experiment in a single paragraph in ~150 words, featuring particularly the (numerical) results and principal conclusions. It is entirely separate from the rest of the report, hence concepts introduced in the abstract need to be introduced again in the main part of the report.

1. Introduction Describes the background to, and aim(s) of, the experiment and whatever theoretical background is needed to make sense of your own work being presented.

There is an expectation that the student reads around the subject before writing the report. This should be reflected in “Introductory”/”Theory” sections that are not solely derived from the laboratory handbooks. The source material for this should be quoted and obviously re-written to fit in with the requirements of the report and to avoid plagiarism. At the same time the “Introductory”/”Theory” sections should be appropriate for the report and not overwhelm it.

If necessary, for example if the introduction becomes large and difficult to read, the section can be split in order to have a distinct "Background Theory" section following on from the more general introduction. Unfamiliar/obscure derivations may be included but exclude trivial steps. The theory section may include a number of equations. These should be on a separate line, numbered and each of the symbols used should be explained the first time they appear, e.g.:

E = mc² (1)

where E is energy (J), m is mass (kg) and c is the speed of light (ms-1).

2. Description of the experiment and 3. Results

These sections are very flexible and tend to cause the most trouble for students in years 0,1 and 2.

There should be descriptions of the main features of the equipment and general descriptions of how it was set up and used. These should be written in paragraph rather than point form, should not be in the form of lists and should not be an instruction set for the experiment. Greater detail should be included where non-standard/unfamiliar equipment has been used, where subjective interpretations or procedures were employed or where significant or systematic errors or uncertainties may have occurred.

If only one experiment was performed the logical flow of the report is clear. However, if the experiment had two or more parts then things can get complicated. Many students fall


into the trap of separating important procedural information from results: e.g presenting procedure 1, procedure 2, results 1 and then results 2 etc. Reports using this format are very difficult to read.

Much better is: procedure 1, results 1, procedure 2, results 2 etc. A question to consider then is how much common experimental information can be placed upfront before getting deeply into the experiments?

Large amounts of data are usually best presented in either tabular or graphical form; choose the most appropriate (but usually not both forms). Diagrams and graphs should be labeled: Figure 1, Figure 2 etc underneath the figure (see example above) and tables as Table 1, Table 2 etc above the table (see example below), and all should have an explanatory title.

Explain how the original data were analyzed, for example indicate whether a value is the average of a number of measurements and/or refer (by number) to the mathematical equations used (see notes below). However, the actual mathematical working should not be included. Graphs should show the best fit straight line (but not the error fits) if applicable and numerical values should always be quoted with their associated errors. Again, do not show the mathematical working used to obtain errors.

4. Discussion

The discussion section is very important in that it both brings together the previous sections and is the point at which students can demonstrate “critical awareness” through interpretation of the meaning of the previously described results.

Other items that might be discussed are: consistency of readings, accuracy, limitations of apparatus or measurements, suggestions for improvements of apparatus, comparison of results obtained by different methods, comparison with theoretical behaviour or accepted values, unexpected behaviour, future work. However it is clear that some of these are experimental considerations that could equally well be placed in the previous sections in the case of a complicated/multi-experiment report.

5. Conclusions

Reports should end with a conclusions section. These should summarize the main results and findings.

6. References

References should be numbered and placed in the correct order in the text (i.e. the Vancouver system). They can be denoted by a superscript number¹ or by a number in square brackets [1], or by other (logical) systems.

The procedure can be stated in words in the following way:

At the point in the report at which it is necessary to make the reference insert a number in square brackets, e.g. [1], the numbers should start with [1] and be in the order in which they appear in the report.

At the end of the report in the section headed “References” the full reference is given as follows: In the case of a book: Author list, title, publisher, place published, year and if relevant, page number. e.g. [1] H.D. Young, R.A. Freedman, University Physics, Pearson, San Francisco, 2004.


In the case of a journal paper: Author list, title of article, journal title, vol no., page no.s, year. e.g. [2] M.S. Bigelow, N.N. Lepeshkin & R.W. Boyd, “Ultra-slow and superluminal light propagation in solids at room temperature”, Journal of Physics: Condensed Matter, 16, pp.1321-1340, 2004.

In the case of a webpage (note: use carefully as information is sometimes incorrect): Title, institution responsible, web address, date accessed. e.g. [3] “How Hearing Works”, HowStuffWorks inc., http://science.howstuffworks.com/hearing.htm, accessed 13th July 2005

Different publications are likely to insist on one particular system (e.g. Vancouver as done here, or Harvard – author's name and year of publication in the text). Lecturing staff may express a preference.

Appendices

This section is not compulsory but can be used to provide information that doesn't fit into, or is not vital to, the report but that the author still wants or needs to present (possibly as evidence of work carried out). The main text should reference the appendix but it should not be necessary for the reader to read the appendix to understand the report. Examples of material included in appendices include: long, non-standard derivations, computer code, the author's detailed designs for apparatus, results not included in the report and risk assessments (if required). The appendix should include sufficient explanation to make sense of this extra information. Appendices are not usually necessary for year 0, 1 and 2 reports but are more common in years 3 and 4 because of the desire to demonstrate project work.

4.2 General advice

The report should be written in your own words, i.e. do not plagiarize other people's work (including laboratory books, other students' reports, the web or textbooks).

Apart from the abstract and conclusions there should be little repetition in reports.

The past tense is most appropriate and the most commonly used. The report should be impersonal (avoid “I”, “we”, “you” etc).

A well-labelled diagram can be more informative than several paragraphs of prose.

All diagrams, pictures, graphs and figures should be labelled figure 1, figure 2 etc in the order they appear and should have a descriptive figure caption.

Tables should be labelled as table 1, table 2 etc in the order they appear and have a descriptive table caption.

Readers will naturally work through the text of the report. This text should therefore refer to and explain figures, tables, equations etc when appropriate. For example, “Figure x shows…….”.

Related to the last point, figures and tables should appear at an appropriate place in the text and be of an appropriate size. The electronic generation of reports means that there should be no need for full page hand drawn graphs (although these are still allowed at Year 0 level).

It is not necessary to include a risk assessment with your final report; the purpose of that was to ensure your safety when you performed the experiment. However, it may be required as part of longer reports in the third or fourth years, in which case it should be presented in an appendix as proof of its existence.

Pages should be numbered and longer reports (3rd and 4th year project reports) should have a contents page.


4.3 Differentiation between years

1. Style

In essence very few changes of style are expected through the academic years. The aim is to instill the scientific style of writing from the beginning. Such changes as do occur reflect the changing content of the report and the audience (reader).

2. Length of reports

Typical report lengths are shown in table 1 for different student years.

Table 1. Typical lengths of reports (pages assumed to be typed and to include diagrams and tables)

Student Year        Typical word length
0                   1500-2000
1                   2000-3000
2                   2000-3000
3 (interim)         ~3000
3 (final)           up to 6000
4 (interim)         ~3000
4 (final)           up to 6000

3. Scientific content

Experiments in years 0 and 1 are highly prescriptive with well defined aims. In year 2 some of the experiments are likely to allow genuine student enquiry. In years 3 and 4 the two semester projects are open ended, student led and with undetermined outcomes. At the same time the techniques will likely become more sophisticated, the physics more advanced (and distinct from taught modules) and the results more numerous.

Early years reports will inevitably be heavily influenced by the laboratory books provided. Third and fourth year reports will have no such guidance to fall back on and 2nd year reports sit somewhere in between.

Early reports may use laboratory books and text books as reference sources whereas 3rd and 4th year reports should make increasingly extensive references to research papers.

Since longer reports are expected in the 3rd and 4th years the style is perhaps less similar to that of scientific papers and closer to that of a Masters or Ph.D. thesis. Ultimately though it remains “scientific”.


DIARY (LAB BOOK) CHECKLIST (also see page 10)

Date
Experiment Title and Number
Risk Analysis
Brief Introduction
Brief description of what you did and how you did it
Results (indicating errors in readings)
Graphs (where applicable)
Error calculations
Final statement of results with errors
Discussion/Conclusion (including a comparison with accepted results if applicable)

FORMAL REPORT CHECKLIST (also see page 15)

Date
Experiment Title and Number
Abstract
Introduction
Method
Results: Use graphs – and don’t forget to describe them.
Indication of how errors were determined
Final results with errors
Discussion
Conclusion (including a comparison with accepted results if applicable)
Use Appendices if necessary
A risk assessment is unnecessary.