## What Is Research in Computing Science?

*See also: What is a PhD in HCI? and Aaron Sloman's Notes on Presenting Theses.*

**What is Research in Computing Science?**

Chris Johnson, Glasgow Interactive Systems Group (GIST), Department of Computer Science, Glasgow University, Glasgow, G12 8QQ. Tel: +44 141 330 6053, Fax: +44 141 330 4913, Email: johnson@dcs.gla.ac.uk

This paper argues that the expanding scope of "computing science" makes it difficult to sustain traditional scientific and engineering models of research. In particular, recent work in formal methods has abandoned traditional empirical methods. Similarly, research in requirements engineering and human-computer interaction has challenged the proponents of formal methods. These tensions stem from the fact that "computing science" is a misnomer. Topics that are currently considered part of the discipline of computing science are technology driven rather than theory driven. This creates problems if academic departments are to impose scientific criteria during the assessment of PhDs. It is, therefore, important that people ask themselves "What is research in computing science?" before starting on a higher degree. This paper is intended as a high-level introduction for first-year research students or students on an advanced MSc course. It should be read in conjunction with Basic Research Skills in Computing Science.

Keywords: research skills, computing science.

**1. Introduction**

Good research practice suggests that we should begin by defining our terms. The Concise Oxford Dictionary defines research as:

- research. 1.a. the systematic investigation into and study of materials, sources, etc., in order to establish facts and reach new conclusions. b. an endeavour to discover new or collate old facts etc. by the scientific study of a subject or by a course of critical investigation.

This definition is useful because it immediately focuses upon the systematic nature of research. In other words, the very meaning of the term implies a research method.
These methods or systems essentially provide a model or structure for logical argument.

**1.1 The Dialectic of Research**

The highest level of logical argument can be seen in the structure of debate within a particular field. Each contribution to that debate falls into one of three categories:

- **thesis**: the original statement of an idea. However, very few research contributions can claim total originality. Most borrow ideas from previous work, even if that research has been conducted in another discipline.
- **antithesis**: an argument that challenges a previous thesis. Typically, this argument may draw upon new sources...

## Thermoelectric Coolers, Also Called Thermoelectric Modules or Peltier Coolers

**ABSTRACT**

Thermoelectric coolers, also called thermoelectric modules or Peltier coolers, are semiconductor-based electronic components that function as small heat pumps. When a low DC voltage is applied to a thermoelectric module, heat moves through the module from one side to the other: one face of the module is cooled, while the other face simultaneously heats up. The phenomenon can be reversed by changing the polarity of the applied DC voltage, causing heat to flow in the opposite direction. A thermoelectric module can therefore be used for both heating and cooling, which makes it highly suitable for precise temperature-control applications. Conventional coolers used in homes and industries depend on refrigerants such as chlorofluorocarbons, which threaten the ozone layer. Thermoelectric coolers, by contrast, are environmentally friendly, compact, and affordable. They have several advantages: solid-state construction, quiet and reliable operation, no CFCs, and precise temperature control. A thermoelectric cooler can lower the temperature of an object below ambient as well as maintain the temperature of an object above ambient. Thermoelectric coolers can be used in applications that require heat removal ranging from milliwatts up to several thousand watts, so they are used in the most demanding industries: medical, laboratory, aerospace, semiconductor, telecom, industrial, and consumer. Uses range from simple food and beverage coolers for an afternoon picnic to extremely sophisticated temperature-control systems in missiles and space vehicles. A thermoelectric cooler provides a solution that is smaller, lighter, and more reliable than a comparably small compressor system, and it offers a convenient, earth-friendly alternative.
Researchers are working on improving the efficiency of thermoelectric devices, reducing their cost, and broadening their applications.

Submitted by: Kapil Agrawal, Regd. No. 1001217041, Electrical Engg.

**1. Introduction**

Although the principle of thermoelectricity dates back to the discovery of the Peltier effect in 1834 [1], there was little practical application of the phenomenon until the middle 1950s. Before then, the poor thermoelectric properties of known materials made them unsuitable for use in a practical refrigerating device. It was only in the mid-1950s that a major thermoelectric material design approach was introduced by A. V. Ioffe, leading to the discovery of semiconducting compounds such as Bi2Te3, which is still used in thermoelectric coolers today. These materials made possible the development of practical thermoelectric devices for...
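The heating-and-cooling behaviour described above is usually captured by a standard textbook heat-balance model for a single-stage module: Peltier pumping at the cold face, minus half the Joule heating, minus heat conducted back from the hot side. The sketch below illustrates that model only; the module parameters (Seebeck coefficient S, resistance R, conductance K) and operating point are illustrative assumptions, not values from this text or any particular device.

```python
# Textbook heat-balance model for a single-stage Peltier module.
# All parameter values below are illustrative, not from a real datasheet.

def peltier_cooling_power(S, R, K, I, T_c, T_h):
    """Heat pumped from the cold side, in watts.

    S   : module Seebeck coefficient (V/K)
    R   : module electrical resistance (ohm)
    K   : module thermal conductance (W/K)
    I   : drive current (A)
    T_c : cold-side temperature (K)
    T_h : hot-side temperature (K)
    """
    dT = T_h - T_c
    # Peltier pumping - half the Joule heat - back-conduction across the module
    return S * T_c * I - 0.5 * I**2 * R - K * dT

def coefficient_of_performance(S, R, K, I, T_c, T_h):
    """COP = heat pumped / electrical power input."""
    q_c = peltier_cooling_power(S, R, K, I, T_c, T_h)
    p_in = S * (T_h - T_c) * I + I**2 * R  # work against Seebeck voltage + Joule heating
    return q_c / p_in

# Illustrative module: S = 0.05 V/K, R = 2 ohm, K = 0.5 W/K, 3 A, 20 K lift
q = peltier_cooling_power(0.05, 2.0, 0.5, 3.0, 290.0, 310.0)
cop = coefficient_of_performance(0.05, 2.0, 0.5, 3.0, 290.0, 310.0)
print(f"cooling power: {q:.1f} W, COP: {cop:.3f}")
```

The model also shows why polarity reversal works: flipping the sign of I flips the Peltier term while the Joule term, proportional to I squared, keeps heating regardless.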

## Quantile-Quantile (Q-Q) Plots

**Quantile-Quantile (q-q) Plots**

Author(s): David Scott

Prerequisites: Histograms, Distributions, Percentiles, Describing Bivariate Data, Normal Distributions

Learning Objectives:
1. State what q-q plots are used for.
2. Describe the shape of a q-q plot when the distributional assumption is met.
3. Be able to create a normal q-q plot.

**Introduction**

The quantile-quantile or q-q plot is an exploratory graphical device used to check the validity of a distributional assumption for a data set. The basic idea is to compute the theoretically expected value for each data point based on the distribution in question. If the data indeed follow the assumed distribution, then the points on the q-q plot will fall approximately on a straight line. Before delving into the details of q-q plots, we first describe two related graphical methods for assessing distributional assumptions: the histogram and the cumulative distribution function (CDF). As will be seen, q-q plots are more general than these alternatives.

**Assessing Distributional Assumptions**

As an example, consider data measured from a physical device such as the spinner depicted in Figure 1. The red arrow is spun around the center, and when the arrow stops spinning, the number between 0 and 1 that it points to is recorded. Can we determine whether the spinner is fair? If the spinner is fair, these numbers should follow a uniform distribution. To investigate, spin the arrow n times and record the measurements as {μ1, μ2, …, μn}. In this example, we collect n = 100 samples.

Figure 1. A physical device that gives samples from a uniform distribution.

The histogram provides a useful visualization of these data. In Figure 2, we display three histograms of the sample on a probability scale. The histogram should be flat for a uniform sample, but the visual impression varies depending on whether the histogram has 10, 5, or 3 bins.
The last histogram looks flat, but the other two do not look obviously flat, and it is not clear which histogram we should base our conclusion on.

Figure 2. Three histograms of a sample of 100 uniform points.

Alternatively, we might use the cumulative distribution function (CDF), denoted F(μ). The CDF gives the probability that the spinner yields a value less than or equal to μ, that is, the probability that the red arrow lands in the interval [0, μ]. By simple arithmetic, F(μ) = μ, which is...
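The q-q construction for the spinner example can be sketched directly: sort the sample and pair each order statistic with a theoretical uniform quantile. If the spinner is fair, the pairs lie close to the line y = x. This is a minimal illustration of the idea; the plotting positions (i − 0.5)/n are one common convention, and the simulated "spins" stand in for real measurements.

```python
import random

def uniform_qq_points(sample):
    """Pair theoretical uniform quantiles with sorted observed values."""
    n = len(sample)
    observed = sorted(sample)
    # Plotting positions (i - 0.5)/n, i = 1..n: one common convention.
    theoretical = [(i - 0.5) / n for i in range(1, n + 1)]
    return list(zip(theoretical, observed))

random.seed(0)
spins = [random.random() for _ in range(100)]  # n = 100 simulated spinner samples
points = uniform_qq_points(spins)

# For a fair spinner the points hug the 45-degree line y = x;
# the maximum deviation summarizes how far they stray.
max_dev = max(abs(obs - theo) for theo, obs in points)
print(f"max deviation from y = x: {max_dev:.3f}")
```

Plotting the pairs (with any plotting library) gives the q-q plot itself; the deviation statistic above is just a numeric stand-in for the visual check.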

## Strength Reduction Factors in Performance-Based Design

NISEE: National Information Service for Earthquake Engineering, University of California, Berkeley

**Strength Reduction Factors in Performance-Based Design**

By Eduardo Miranda, National Center for Disaster Prevention (CENAPRED), A.V. Delfin Madrigal 665, 04360 Mexico, D.F., Mexico

Note: this paper was presented at the EERC-CUREe Symposium in Honor of Vitelmo V. Bertero, January 31 – February 1, 1997, Berkeley, California.

**SUMMARY**

Strength reduction factors that are used to reduce design forces in earthquake-resistant design are discussed. Based on recent research, the paper presents the different components of the so-called R factors and discusses how these can be incorporated into performance-based earthquake-resistant design. The first component discussed is the reduction in lateral strength demand produced by nonlinear behavior in the structure, which takes into account the structure's hysteretic energy dissipation capacity. The paper first presents a summary and comparison of recent statistical studies on strength reduction factors computed for single-degree-of-freedom systems undergoing different levels of inelastic deformation when subjected to a large number of recorded earthquake ground motions. Despite using significantly different ground-motion databases, the various studies produce remarkably similar results. The main parameters that affect the amplitude of the strength reductions are discussed. The evaluation of the results indicates that strength reductions due to nonlinear behavior are primarily influenced by the maximum tolerable displacement ductility demand, the period of the system, and the soil conditions at the site. Based on these parameters, simplified expressions that can be used in codes are presented. The paper then describes how strength reduction factors derived from single-degree-of-freedom systems need to be modified in order to be used in the design of multi-degree-of-freedom systems.
Reductions in design forces due to overstrength are also discussed. These reductions arise because the lateral strength of a structure is typically higher, and in some cases much higher, than its nominal strength capacity. They can be subdivided to account for the additional strength between the nominal strength and the formation of the first plastic hinge, and the additional strength between that point and the formation of a collapse mechanism. Finally, the paper discusses how these reduction factors can be implemented in performance-based design.

**INTRODUCTION**

Design lateral strengths prescribed in earthquake-resistant design provisions are typically lower, and in some cases much lower, than the lateral strength required to maintain a structure in the elastic range in the event...
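The paper's own simplified code expressions are not reproduced in this excerpt. As an illustration of the kind of period- and ductility-dependent relation being described, the sketch below uses the classic Newmark-Hall rules for a single-degree-of-freedom system: no reduction at very short periods, the equal-energy rule at moderate periods, and the equal-displacement rule (R = μ) at long periods. The period limits used here are illustrative round numbers, not values from this paper.

```python
import math

# Illustrative ductility-based strength reduction factor for a SDOF system,
# using the classic Newmark-Hall approximation -- NOT the simplified
# expressions proposed in this paper, which are not given in this excerpt.

def newmark_hall_R(mu, T, T_short=0.12, T_moderate=0.5):
    """Strength reduction factor R.

    mu : target displacement ductility demand (>= 1)
    T  : initial period of the system (s)
    T_short, T_moderate : illustrative period limits; the original rules
    tie these to the corner periods of the response spectrum.
    """
    if T < T_short:
        return 1.0                       # very stiff: little reduction possible
    if T < T_moderate:
        return math.sqrt(2.0 * mu - 1.0)  # equal-energy rule
    return float(mu)                      # equal-displacement rule: R = mu

for T in (0.05, 0.3, 1.0):
    print(f"T = {T} s -> R = {newmark_hall_R(mu=4.0, T=T):.3f}")
```

The sketch captures the qualitative finding summarized above: the achievable reduction grows with the tolerable ductility demand and depends strongly on the period of the system.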