
# Difference between alpha and beta testing

From Wikipedia

Beta (finance)

In finance, the beta (β) of a stock or portfolio is a number describing the relation of its returns with that of the financial market as a whole.

An asset has a Beta of zero if its returns change independently of changes in the market's returns. A positive beta means that the asset's returns generally follow the market's returns, in the sense that they both tend to be above their respective averages together, or both tend to be below their respective averages together. A negative beta means that the asset's returns generally move opposite the market's returns: one will tend to be above its average when the other is below its average.

The beta coefficient is a key parameter in the capital asset pricing model (CAPM). It measures the part of the asset's statistical variance that cannot be removed by the diversification provided by the portfolio of many risky assets, because of the correlation of its returns with the returns of the other assets that are in the portfolio. Beta can be estimated for individual companies using regression analysis against a stock market index.

## Definition

The formula for the beta of an asset within a portfolio is

\beta_a = \frac{\mathrm{Cov}(r_a, r_p)}{\mathrm{Var}(r_p)} ,

where r_a measures the rate of return of the asset, r_p measures the rate of return of the portfolio, and \mathrm{Cov}(r_a, r_p) is the covariance between the two rates of return. The portfolio of interest in the CAPM formulation is the market portfolio that contains all risky assets, and so the r_p terms in the formula are replaced by r_m, the rate of return of the market.
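
This estimation can be sketched in a few lines. The return series below are hypothetical, and NumPy is assumed available; the point is only that beta is the ratio of a sample covariance to a sample variance.

```python
import numpy as np

# Hypothetical monthly returns for an asset and for the market portfolio
r_a = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.04])
r_m = np.array([0.01, -0.01, 0.02, 0.01, -0.01, 0.03])

# beta_a = Cov(r_a, r_m) / Var(r_m), using sample statistics (N - 1) throughout
beta = np.cov(r_a, r_m)[0, 1] / np.var(r_m, ddof=1)
print(round(beta, 3))
```

In practice the same number comes from the slope coefficient of an ordinary least-squares regression of the asset's returns on the market's returns, which is how beta is usually reported.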

Beta is also referred to as financial elasticity or correlated relative volatility, and can be described as a measure of the sensitivity of the asset's returns to market returns, its non-diversifiable risk, its systematic risk, or market risk. On an individual asset level, measuring beta can give clues to volatility and liquidity in the marketplace. In fund management, measuring beta is thought to separate a manager's skill from his or her willingness to take risk.

The beta coefficient was born out of linear regression analysis. It comes from a regression, over a given period, of the returns of an individual asset (y-axis) against the returns of a portfolio such as a stock index (x-axis). The regression line is then called the security characteristic line (SCL).

\mathrm{SCL}: r_{a,t} = \alpha_a + \beta_a r_{m,t} + \epsilon_{a,t}

\alpha_a is called the asset's alpha and \beta_a is called the asset's beta coefficient. Both coefficients have an important role in Modern portfolio theory.

For example, in a year where the broad market or benchmark index returns 25% above the risk-free rate, suppose two managers gain 50% above the risk-free rate. Because this higher return is theoretically possible merely by taking a leveraged position in the broad market so that the portfolio's beta is exactly 2.0, we would expect a skilled portfolio manager to have built the outperforming portfolio with a beta somewhat less than 2, such that the excess return not explained by the beta is positive. If one of the managers' portfolios has an average beta of 3.0 and the other's has a beta of only 1.5, then the CAPM simply states that the extra return of the first manager is not sufficient to compensate us for that manager's risk, whereas the second manager has done more than expected given the risk. Whether investors can expect the second manager to duplicate that performance in future periods is of course a different question.
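
The arithmetic behind that comparison can be sketched as follows, with the numbers taken from the paragraph above (all returns are quoted in excess of the risk-free rate, so the risk-free rate cancels out):

```python
def alpha(portfolio_excess, beta, market_excess):
    """CAPM alpha: the excess return not explained by market exposure."""
    return portfolio_excess - beta * market_excess

market_excess = 0.25     # benchmark returned 25% above the risk-free rate
portfolio_excess = 0.50  # both managers returned 50% above the risk-free rate

print(alpha(portfolio_excess, 3.0, market_excess))  # -0.25: not enough return for the risk
print(alpha(portfolio_excess, 1.5, market_excess))  # 0.125: more than expected for the risk
```

The sign of the alpha is exactly the verdict the paragraph describes: negative for the beta-3.0 manager, positive for the beta-1.5 manager.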

### Security market line

The SML graphs the results from the capital asset pricing model (CAPM) formula. The x-axis represents the risk (beta), and the y-axis represents the expected return. The market risk premium is determined from the slope of the SML.

The relationship between β and required return is plotted on the security market line (SML), which shows expected return as a function of β. The intercept is the nominal risk-free rate available for the market, while the slope is E(R_M) − R_f. The security market line can be regarded as representing a single-factor model of the asset price, where beta is exposure to changes in the value of the market. The equation of the SML is thus:

\mathrm{SML}: E(R_i) - R_f = \beta_i (E(R_M) - R_f).

It is a useful tool in determining if an asset being considered for a portfolio offers a reasonable expected return for risk. Individual securities are plotted on the SML graph. If the security's risk versus expected return is plotted above the SML, it is undervalued because the investor can expect a greater return for the inherent risk. A security plotted below the SML is overvalued because the investor would be accepting a lower return for the amount of risk assumed.
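
As a sketch of that comparison, the SML equation gives the required return for a given beta; the 3% risk-free rate and 8% expected market return below are hypothetical figures chosen for illustration.

```python
def sml_expected_return(beta, r_f, e_r_m):
    """SML: E(R_i) = R_f + beta_i * (E(R_M) - R_f)."""
    return r_f + beta * (e_r_m - r_f)

required = sml_expected_return(1.2, 0.03, 0.08)  # 0.03 + 1.2 * 0.05 = 0.09
# A beta-1.2 asset expected to return more than 9% plots above the SML
# (undervalued); one expected to return less plots below it (overvalued).
```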

## Beta, volatility and correlation

A misconception about beta is that it measures the volatility of a security relative to the volatility of the market. If this were true, then a security with a beta of 1 would have the same volatility of returns as the market. In fact, this is not the case, because beta also incorporates the correlation of returns between the security and the market. The formula relating beta, the volatilities of the security and the market (\sigma and \sigma_m), and the correlation of returns (r) is:

\beta = (\sigma / \sigma_m) r

For example, a stock with low volatility and high correlation to the market can have the same beta as a stock with high volatility and low correlation; beta alone cannot decide which is more "risky".

This also leads to an inequality (because |r| is not greater than one):

\sigma \ge |\beta| \sigma_m
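
A short sketch (with hypothetical volatilities and correlations) shows how two quite different stocks can share the same beta, and checks the inequality:

```python
def beta_from_vol_corr(sigma, sigma_m, r):
    """beta = (sigma / sigma_m) * r, where r is the correlation of returns."""
    return (sigma / sigma_m) * r

sigma_m = 0.15  # hypothetical market volatility

# Stock A: low volatility, high correlation; stock B: high volatility, low correlation
beta_a = beta_from_vol_corr(0.10, sigma_m, 0.9)  # (0.10 / 0.15) * 0.9 = 0.6
beta_b = beta_from_vol_corr(0.30, sigma_m, 0.3)  # (0.30 / 0.15) * 0.3 = 0.6

# Nearly identical betas, very different risk profiles...
assert abs(beta_a - beta_b) < 1e-9
# ...and sigma >= |beta| * sigma_m holds for both, since |r| <= 1
assert 0.10 >= abs(beta_a) * sigma_m
assert 0.30 >= abs(beta_b) * sigma_m
```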

Beta particle

Beta particles are high-energy, high-speed electrons or positrons emitted by certain types of radioactive nuclei such as potassium-40. The beta particles emitted are a form of ionizing radiation also known as beta rays. The production of beta particles is termed beta decay. They are designated by the Greek letter beta (β). There are two forms of beta decay, β− and β+, which respectively give rise to the electron and the positron.

## β− decay (electron emission)

An unstable atomic nucleus with an excess of neutrons may undergo β− decay, where a neutron is converted into a proton, an electron and an electron-type antineutrino (the antiparticle of the neutrino):

n \rightarrow p + e^- + \bar{\nu}_e

This process is mediated by the weak interaction. The neutron turns into a proton through the emission of a virtual W− boson. At the quark level, W− emission turns a down-type quark into an up-type quark, turning a neutron (one up quark and two down quarks) into a proton (two up quarks and one down quark). The virtual W− boson then decays into an electron and an antineutrino.

Beta decay commonly occurs among the neutron-rich fission byproducts produced in nuclear reactors. Free neutrons also decay via this process. This is the source of the copious amount of electron antineutrinos produced by fission reactors.

## β+ decay (positron emission)

Unstable atomic nuclei with an excess of protons may undergo β+ decay, also called positron decay, where a proton is converted into a neutron, a positron and an electron-type neutrino:

p \rightarrow n + e^+ + \nu_e

Beta plus decay can only happen inside nuclei when the absolute value of the binding energy of the daughter nucleus is higher than that of the mother nucleus.

## Interaction with other matter

Of the three common types of radiation given off by radioactive materials, alpha, beta and gamma, beta has the medium penetrating power and the medium ionising power. Although the beta particles given off by different radioactive materials vary in energy, most beta particles can be stopped by a few millimeters of aluminum. Being composed of charged particles, beta radiation is more strongly ionising than gamma radiation. When passing through matter, a beta particle is decelerated by electromagnetic interactions and may give off bremsstrahlung X-rays.

## Uses

Beta particles can be used to treat health conditions such as eye and bone cancer, and are also used as tracers. Strontium-90 is the material most commonly used to produce beta particles. Beta particles are also used in quality control to test the thickness of an item, such as paper, coming through a system of rollers. Some of the beta radiation is absorbed while passing through the product. If the product is made too thick or thin, a correspondingly different amount of radiation will be absorbed. A computer program monitoring the quality of the manufactured paper will then move the rollers to change the thickness of the final product. The well-known 'betalight' contains tritium and a phosphor.

Beta plus (or positron) decay of a radioactive tracer isotope is the source of the positrons used in positron emission tomography (PET scan).

## History

Henri Becquerel, while experimenting with fluorescence, accidentally discovered that uranium exposed a photographic plate wrapped in black paper with some unknown radiation that, unlike X-rays, could not be turned off. Ernest Rutherford continued these experiments and discovered two different kinds of radiation:

• alpha particles that did not show up on the Becquerel plates because they were easily absorbed by the black wrapping paper
• beta particles, which are 100 times more penetrating than alpha particles.

He published his results in 1897.

## Health

Beta particles are able to penetrate living matter to a certain extent and can change the structure of struck molecules. In most cases such change can be considered damage, with results possibly as severe as cancer or death. If the struck molecule is DNA, the result can be a spontaneous mutation.

Beta sources can be used in radiation therapy to kill cancer cells.

## Future use

One can envisage betavoltaic cells to supply power.

Software testing

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing also provides an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs.

Software testing can also be stated as the process of validating and verifying that a software program/application/product:

1. meets the business and technical requirements that guided its design and development;
2. works as expected; and
3. can be implemented with the same characteristics.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed. As such, the methodology of the test is governed by the software development methodology adopted.

Different software development models will focus the test effort at different points in the development process. Newer development models, such as Agile, often employ test driven development and place an increased portion of the testing in the hands of the developer, before it reaches a formal team of testers. In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed.

## Overview

Testing can never completely identify all the defects within software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles: principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

Every software product has a target audience. For example, the audience for video game software is completely different from banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment.

A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.

## History

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. Although his attention was on breakage testing ("a successful test is one that finds a bug") it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing in the following stages:

• Until 1956 - Debugging oriented
• 1957–1978 - Demonstration oriented
• 1979–1982 - Destruction oriented
• 1983–1987 - Evaluation oriented
• 1988–2000 - Prevention oriented

## Software testing topics

### Scope

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. This is a non-trivial pursuit. Testing cannot establish that a product functions properly under all conditions, but can only establish that it does not function properly under specific conditions. The scope of software testing often includes examination of code as well as execution of that code in various environments and conditions, and it examines two aspects of the code: does it do what it is supposed to do, and does it do what it needs to do. In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.

### Functional vs non-functional testing

Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work".
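
In code, a functional test is simply a check that a specific required action behaves as specified. A minimal sketch follows; the `withdraw` function and its overdraft rule are hypothetical requirements invented for illustration.

```python
def withdraw(balance, amount):
    """Hypothetical requirement: debit the balance, rejecting overdrafts."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Functional tests verify specific actions named in the requirements.
assert withdraw(100, 30) == 70  # "can the user withdraw money?"

try:
    withdraw(100, 200)
except ValueError:
    pass  # "is an overdraft rejected?" -- yes, as required
else:
    raise AssertionError("overdraft was not rejected")
```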

Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or security. Non-functional testing tends to answer such questions as "how many people can log in at once".

### Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements that result in errors of omission by the program designer. A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

Software faults occur through the following processes. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new hardware platform, alterations in source data or interacting with different software. A single defect may result in a wide range of failure symptoms.
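
The defect-versus-failure distinction can be made concrete with a small sketch (hypothetical code):

```python
def average(values):
    # Defect: division by zero when `values` is empty -- present in the
    # source code whether or not this line is ever executed with bad input.
    return sum(values) / len(values)

print(average([2, 4, 6]))  # 4.0 -- the defect exists but causes no failure here
# average([]) would raise ZeroDivisionError: the same defect, now a failure,
# triggered only when the environment supplies an empty input.
```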

### Finding faults early

It is commonly believed that the earlier a defect is found, the cheaper it is to fix.

Alpha helix

A common motif in the secondary structure of proteins, the alpha helix (α-helix) is a right-handed coiled or spiral conformation, in which every backbone N-H group donates a hydrogen bond to the backbone C=O group of the amino acid four residues earlier (i+4 \rightarrow i hydrogen bonding). This secondary structure is also sometimes called a classic Pauling-Corey-Branson alpha helix (see below). Among types of local structure in proteins, the α-helix is the most regular and the most predictable from sequence, as well as the most prevalent.

## Historical development

In the early 1930s, William Astbury showed that there were drastic changes in the X-ray fiber diffraction of moist wool or hair fibers upon significant stretching. The data suggested that the unstretched fibers had a coiled molecular structure with a characteristic repeat of ~5.1 Å (0.51 nm).

Astbury initially proposed a kinked-chain structure for the fibers. He later joined other researchers (notably the American chemist Maurice Huggins) in proposing that:

• the unstretched protein molecules formed a helix (which he called the α-form); and
• the stretching caused the helix to uncoil, forming an extended state (which he called the β-form).

Although incorrect in their details, Astbury's models of these forms were correct in essence and correspond to modern elements of secondary structure, the α-helix and the β-strand (Astbury's nomenclature was kept), which were developed by Linus Pauling, Robert Corey and Herman Branson in 1951 (see below); that paper showed both right- and left-handed helices, although in 1960 the crystal structure of myoglobin showed that the right-handed form is the common one. Hans Neurath was the first to show that Astbury's models could not be correct in detail, because they involved clashes of atoms. Notably, Neurath's paper and Astbury's data inspired H. S. Taylor, Maurice Huggins, and Bragg and collaborators to propose models of keratin that somewhat resemble the modern α-helix.

Two key developments in the modeling of the modern α-helix were (1) the correct bond geometry, thanks to the crystal structure determinations of amino acids and peptides and Pauling's prediction of planar peptide bonds; and (2) his relinquishing of the assumption of an integral number of residues per turn of the helix. The pivotal moment came in the early spring of 1948, when Pauling caught a cold and went to bed. Being bored, he drew a polypeptide chain of roughly correct dimensions on a strip of paper and folded it into a helix, being careful to maintain the planar peptide bonds. After a few attempts, he produced a model with physically plausible hydrogen bonds. Pauling then worked with Corey and Branson to confirm his model before publication. In 1954 Pauling was awarded his first Nobel Prize "for his research into the nature of the chemical bond and its application to the elucidation of the structure of complex substances" [http://nobelprize.org/nobel_prizes/chemistry/laureates/1954/] (such as proteins), prominently including the structure of the α-helix.

## Structure

### Geometry and hydrogen bonding

The amino acids in an α-helix are arranged in a right-handed helical structure where each amino acid residue corresponds to a 100° turn in the helix (i.e., the helix has 3.6 residues per turn), and a translation of 1.5 Å along the helical axis. (Short pieces of left-handed helix sometimes occur with a large content of achiral glycine amino acids, but are unfavorable for the other normal, biological L-amino acids.) The pitch of the alpha-helix (the vertical distance between two consecutive turns of the helix) is 5.4 Å, which is the product of 1.5 and 3.6. What is most important is that the N-H group of an amino acid forms a hydrogen bond with the C=O group of the amino acid four residues earlier; this repeated i+4 \rightarrow i hydrogen bonding is the most prominent characteristic of an α-helix. Official international nomenclature [http://www.chem.qmul.ac.uk/iupac/misc/ppep1.html] specifies two ways of defining α-helices: rule 6.2 in terms of repeating φ, ψ torsion angles (see below) and rule 6.3 in terms of the combined pattern of pitch and hydrogen bonding. α-helices can be identified in protein structures using several computational methods, one of which is DSSP (Dictionary of Protein Secondary Structure).
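
The helix parameters quoted above fit together, as a quick arithmetic check shows (values from the text):

```python
# Each residue turns the helix by 100 degrees and rises 1.5 angstroms.
residues_per_turn = 360 / 100                  # 3.6 residues per full turn
rise_per_residue = 1.5                         # angstroms along the helical axis
pitch = residues_per_turn * rise_per_residue   # ~5.4 angstroms per turn
```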

Similar structures include the 3_{10} helix (i+3 \rightarrow i hydrogen bonding) and the π-helix (i+5 \rightarrow i hydrogen bonding). The α-helix can be described as a 3.6_{13} helix, since the i+4 spacing adds three more atoms to the H-bonded loop compared to the tighter 3_{10} helix. The subscripts refer to the number of atoms (including the hydrogen) in the closed loop formed by the hydrogen bond.

Residues in α-helices typically adopt backbone (φ, ψ) dihedral angles around (−60°, −45°), as shown in the image at right. In more general terms, they adopt dihedral angles such that the ψ dihedral angle of one residue and the φ dihedral angle of the next residue sum to roughly −105°. As a consequence, α-helical dihedral angles, in general, fall on a diagonal stripe on the Ramachandran diagram (of slope −1), ranging from (−90°, −15°)

Question: This is what I have: 1) Alpha has a positive charge, beta a negative charge, gamma no charge. 2) Alpha is weak, beta is stronger than alpha, gamma is the strongest. 3) Alpha consists of helium nuclei with atomic number 2 and atomic mass 4; beta consists of electrons with atomic number −1 and atomic mass 0; gamma is electromagnetic radiation. Please, I just need two more if anyone knows.

Answers: 4. Alpha particles are blocked by a couple of sheets of paper or an inch of air, while beta particles are blocked by wood, plastic and especially metals. Gamma rays require several feet of concrete or several inches of lead to be blocked. 5. Alpha particles are the heaviest, beta the lightest, and gamma has no mass.

Question: Wikipedia gives this picture of cellulose: http://en.wikipedia.org/wiki/Image:Cellulose-2D-skeletal.svg , and this of maltose: http://en.wikipedia.org/wiki/Image:Maltose_structure.svg . These pictures, however, do not show me the difference. In my opinion, a clearer picture of maltose is this: http://sci-toys.com/ingredients/maltose_2.gif . Similarly, for cellulose: http://www.greenspirit.org.uk/resources/cellulose.gif . It seems that the beta-1,4 bond makes cellulose a polymer, while the alpha-1,4 bond makes maltose a molecule.

Answers: Yes, they do. The difference is in the orientation about the bonds. Look at the oxygen connecting the rings. If you look closely at the rest, you'll find the remaining differences.

Question: Beta radiation has a longer range in air than alpha particles. One of the reasons is that it is (I took two options out because they weren't the answer): - less highly ionising than alpha due to its larger electric charge, or - less highly ionising than alpha due to its smaller electric charge. Which one is right? I know gamma doesn't have an electric charge, but which one, alpha or beta, has the bigger electric charge? Thank you.

Answers:Neither answer. Alpha particles have a comparatively large mass (He nucleus, 2 protons + 2 neutrons) and when they bump into other atoms of similar size they lose a lot of their energy. So although alpha particles are very destructive due to their large size, they don't normally present too much danger because they get stopped very fast. (Unless of course someone injects a source of alpha particles directly into your bloodstream, like they did a while ago to that Russian agent in London, who died after 3 weeks or so of agony.) Beta particles are essentially electrons. They are tiny and can fly between the atoms very easily: also when they hit a large nucleus they tend to bounce off without losing much energy, instead of colliding as in a head-on car collision, as alpha particles do. But beta particles don't have a lot of energy. Gamma rays are very dangerous because they are a form of ultra-high-frequency electromagnetic radiation and can travel very far before releasing their energy, even through fairly dense objects, and they have a high energy level.

Question: How about the letters, what are their codes? For E, F, G, H, I, K, L... M... N... and so on?

Answers:

• A - Alpha
• B - Bravo
• C - Charlie
• D - Delta
• E - Echo
• F - Foxtrot
• G - Golf
• H - Hotel
• I - India
• J - Juliet
• K - Kilo
• L - Lima
• M - Mike
• N - November
• O - Oscar
• P - Papa
• Q - Quebec
• R - Romeo
• S - Sierra
• T - Tango
• U - Uniform
• V - Victor
• W - Whiskey
• X - X-ray
• Y - Yankee
• Z - Zulu

Each word is supposed to sound different from every other in order to make it clearer when spelling words over a bad signal.