

From Wikipedia

Algorithm examples

This article, Algorithm examples, supplements Algorithm and Algorithm characterizations.

An example: Algorithm specification of addition m+n

Choice of machine model:

There is no “best”, or “preferred” model. The Turing machine, while considered the standard, is notoriously awkward to use. And different problems seem to require different models to study them. Many researchers have observed these problems, for example:

“The principal purpose of this paper is to offer a theory which is closely related to Turing's but is more economical in the basic operations” (Wang (1954) p. 63)
“Certain features of Turing machines have induced later workers to propose alternative devices as embodiments of what is to be meant by effective computability.... a Turing machine has a certain opacity, its workings are known rather than seen. Further a Turing machine is inflexible ... a Turing machine is slow in (hypothetical) operation and, usually complicated. This makes it rather hard to design it, and even harder to investigate such matters as time or storage optimization or a comparison between efficiency of two algorithms.” (Melzak (1961) p. 281)
Shepherdson-Sturgis (1963) proposed their register-machine model because “these proofs [using Turing machines] are complicated and tedious to follow for two reasons: (1) A Turing machine has only one head... (2) It has only one tape....” They were in search of “a form of idealized computer which is sufficiently flexible for one to be able to convert an intuitive computational procedure into a program for such a machine” (p. 218).
“I would prefer something along the lines of the random access computers of Angluin and Valiant [as opposed to the pointer machine of Schönhage]” (Gurevich 1988 p. 6)
“Showing that a function is Turing computable directly...is rather laborious ... we introduce an ostensibly more flexible kind of idealized machine, an abacus machine...” (Boolos-Burgess-Jeffrey 2002 p. 45).

About all that one can insist upon is that the algorithm-writer specify in exacting detail (i) the machine model to be used and (ii) its instruction set.

Atomization of the instruction set:

The Turing machine model is primitive, but not as primitive as it can be. As noted in the above quotes this is a source of concern when studying complexity and equivalence of algorithms. Although the observations quoted below concern the Random access machine model – a Turing-machine equivalent – the problem remains for any Turing-equivalent model:

“...there hardly exists such a thing as an ‘innocent’ extension of the standard RAM model in the uniform time measure; either one only has additive arithmetic, or one might as well include all multiplicative and/or bitwise Boolean instructions on small operands....” (van Emde Boas (1992) p. 26)
“Since, however, the computational power of a RAM model seems to depend rather sensitively on the scope of its instruction set, we nevertheless will have to go into detail...
“One important principle will be to admit only such instructions which can be said to be of an atomistic nature. We will describe two versions of the so-called successor RAM, with the successor function as the only arithmetic operation....the RAM0 version deserves special attention for its extreme simplicity; its instruction set consists of only a few one-letter codes, without any (explicit) addressing.” (Schönhage (1980) p. 494)

Example #1: The most general (and original) Turing machine – single-tape with left-end, multi-symbols, 5-tuple instruction format – can be atomized into the Turing machine of Boolos-Burgess-Jeffrey (2002) – single-tape with no ends, two "symbols" { B, | } (where B symbolizes "blank square" and | symbolizes "marked square"), and a 4-tuple instruction format. This model in turn can be further atomized into a Post-Turing machine – single-tape with no ends, two symbols { B, | }, and a 0- and 1-parameter instruction set (e.g. { Left, Right, Mark, Erase, Jump-if-marked to instruction xxx, Jump-if-blank to instruction xxx, Halt }).
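To make the atomized instruction set concrete, the following Python sketch simulates such a Post-Turing-style machine. The tuple-based program encoding and the mnemonics are an illustrative convention for this article, not a canonical notation:

# Minimal sketch of a Post-Turing-style machine: two symbols {B, |},
# unbounded tape in both directions, 0- and 1-parameter instructions.
from collections import defaultdict

def run(program, tape, head=0, max_steps=10_000):
    """program: list of ('Left',), ('Right',), ('Mark',), ('Erase',),
    ('Halt',), ('JumpIfMarked', target) or ('JumpIfBlank', target).
    tape: dict position -> 'B' or '|'; missing squares read as 'B'."""
    cells = defaultdict(lambda: 'B', tape)
    pc = 0
    for _ in range(max_steps):
        instr = program[pc]
        op = instr[0]
        if op == 'Halt':
            return cells, head
        elif op == 'Left':
            head -= 1
        elif op == 'Right':
            head += 1
        elif op == 'Mark':
            cells[head] = '|'
        elif op == 'Erase':
            cells[head] = 'B'
        elif op == 'JumpIfMarked' and cells[head] == '|':
            pc = instr[1]
            continue
        elif op == 'JumpIfBlank' and cells[head] == 'B':
            pc = instr[1]
            continue
        pc += 1
    raise RuntimeError('step budget exhausted')

# Example program: march right to the first blank square and mark it.
prog = [('JumpIfBlank', 3),   # 0: scanned square blank? go mark it
        ('Right',),           # 1: otherwise step right
        ('JumpIfMarked', 0),  # 2: still on a mark, keep scanning
        ('Mark',),            # 3: mark the blank square
        ('Halt',)]
cells, head = run(prog, {0: '|', 1: '|'})
print(''.join(cells[i] for i in range(-1, 4)))  # prints B|||B

Every more complex behaviour of such a machine must be built up from 0- and 1-parameter steps of exactly this kind.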

Example #2: The RASP can be reduced to a RAM by moving its instructions off the tape and (perhaps with translation) into its finite-state machine “table” of instructions, the RAM stripped of its indirect instruction and reduced to a 2- and 3-operand “abacus” register machine; the abacus in turn can be reduced to the 1- and 2-operand Minsky (1967)/Shepherdson-Sturgis (1963) counter machine, which can be further atomized into the 0- and 1-operand instructions of Schönhage (and even a 0-operand Schönhage-like instruction set is possible).

Cost of atomization:

Atomization comes at a (usually severe) cost: while the resulting instructions may be “simpler”, atomization (usually) creates more instructions and the need for more computational steps. As shown in the following example the increase in computation steps may be significant (i.e. orders of magnitude – the following example is “tame”), and atomization may (but not always, as in the case of the Post-Turing model) reduce the usability and readability of “the machine code”. For more see Turing tarpit.

Example: The single register machine instruction "INC 3" – increment the contents of register #3, i.e. increase its count by 1 – can be atomized into the 0-parameter instruction set of Schönhage, but with the equivalent number of steps to accomplish the task increasing to 7; this number is directly related to the register number n (i.e. 4+n).
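The linear dependence on n can be illustrated with a toy model. The following sketch is a schematic invention for this article, not Schönhage's actual RAM0 encoding: in a machine with no explicit addressing, reaching register n requires resetting a pointer and advancing it n times, so even a single INC costs a constant plus n micro-steps.

# Schematic illustration of the cost of atomization (NOT Schönhage's
# actual RAM0 instruction set): with no explicit addressing, a pointer
# must be walked from register 0 up to register n before incrementing.
def atomized_inc(registers, n):
    """Perform INC n using only parameterless micro-operations:
    'Z' resets the pointer, 'A' advances it, 'S' takes the successor
    of the register under the pointer.  Returns the step count."""
    steps = ['Z'] + ['A'] * n + ['S']
    # A real machine would also need fetch/decode/halt steps, which is
    # where a small constant like the "4" in 4+n comes from.
    pointer = 0
    for op in steps:
        if op == 'Z':
            pointer = 0
        elif op == 'A':
            pointer += 1
        elif op == 'S':
            registers[pointer] += 1
    return len(steps)

regs = [0, 0, 0, 5]
print(atomized_inc(regs, 3), regs)   # 5 micro-steps here; reg #3 -> 6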

More examples can be found at the pages Register machine and Random access machine, where the addition of the "convenience instructions" CLR h and COPY h1,h2 is shown to reduce the number of steps dramatically. Indirect addressing is the other significant example.

Precise specification of Turing-machine algorithm m+n

As described in Algorithm characterizations, per the specifications of Boolos-Burgess-Jeffrey (2002) and Sipser (2006), and with a nod to the other characterizations, we proceed to specify:

(i) Number format: unary strings of marked squares (a "marked square" signified by the symbol 1) separated by single blanks (signified by the symbol B), e.g. “2,3” = B11B111B
(ii) Machine type: Turing machine: single-tape left-ended or no-ended, 2-symbol { B, 1 }, 4-tuple instruction format.
(iii) Head location: See more at “Implementation Description” below. A symbolic representation of the head's location in the tape's symbol string puts the current state to the right of the scanned symbol. Blank squares may be included in this protocol. The state's number appears with brackets around it, or subscripted; the head's position is indicated by where this state symbol is placed.
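As a preview of what such a specification pins down, the following Python sketch carries out unary addition in the number format of (i). It implements one straightforward strategy (fill the separating blank, then erase one mark) and is not necessarily the exact 4-tuple instruction table a full specification would give:

# Unary addition m + n on a {B, 1} tape, e.g. "B11B111B" -> "B11111BB".
# Strategy: overwrite the single separating blank with a 1 (leaving
# m + n + 1 marks), then erase the final mark so m + n marks remain.
def add_unary(tape):
    cells = list(tape)
    head = 0
    while cells[head] == 'B':          # skip to the first block of 1s
        head += 1
    while cells[head] == '1':          # run to the end of the first block
        head += 1
    cells[head] = '1'                  # fill the separator
    while head < len(cells) and cells[head] == '1':
        head += 1                      # run to the end of the second block
    cells[head - 1] = 'B'              # erase one mark
    return ''.join(cells)

print(add_unary('B11B111B'))           # B11111BB, i.e. "2 + 3 = 5"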

Miller effect

In electronics, the Miller effect accounts for the increase in the equivalent input capacitance of an inverting voltage amplifier due to amplification of the capacitance between the input and output terminals. The additional input capacitance due to the Miller effect is given by

C_{M} = C (1 + A_v)

where A_v is the gain of the amplifier and C is the feedback capacitance.

Although the term Miller effect normally refers to capacitance, any impedance connected between the input and another node exhibiting gain can modify the amplifier input impedance via this effect. These properties of Miller effect are generalized by Miller theorem.

History

The Miller effect was named after John Milton Miller. When Miller published his work in 1920, he was working on vacuum tube triodes; however, the same theory applies to more modern devices such as bipolar and MOS transistors.

Derivation

Consider an ideal inverting voltage amplifier of gain A_v with an impedance Z connected between its input and output nodes. The output voltage is therefore V_o = -A_v V_i. Assuming that the amplifier input draws no current, all of the input current flows through Z and is therefore given by

I_i = \frac{V_i - V_o}{Z} = \frac{V_i (1 + A_v)}{Z}.

The input impedance of the circuit is

Z_{in} = \frac{V_i}{I_i} = \frac{Z}{1+A_v}.

If Z represents a capacitor with impedance Z = \frac{1}{s C}, the resulting input impedance is

Z_{in} = \frac{1}{s C_{M}} \quad \mathrm{where} \quad C_{M}=C (1+A_v).

Thus the effective or Miller capacitance C_M is the physical C multiplied by the factor (1 + A_v).
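As a quick numeric check of these formulas, the following sketch evaluates the Miller capacitance and the resulting input impedance magnitude for illustrative (invented) component values:

# Numeric check of the Miller-capacitance formulas above.
# Component values are illustrative, not from any particular circuit.
import math

A_v = 100.0          # magnitude of the inverting amplifier's gain
C   = 5e-12          # 5 pF feedback capacitance
f   = 1e6            # evaluate at 1 MHz

C_M  = C * (1 + A_v)                  # C_M = C(1 + A_v)
Z_in = 1 / (2 * math.pi * f * C_M)    # |Z_in| = 1 / (omega * C_M)

print(f"Miller capacitance: {C_M*1e12:.0f} pF")   # 505 pF
print(f"|Z_in| at 1 MHz:    {Z_in:.0f} ohms")     # ~315 ohms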

Effects

As most amplifiers are inverting (i.e. the voltage gain is negative, -A_v in the notation above), the effective capacitance at their inputs is increased due to the Miller effect. This can lower the bandwidth of the amplifier, reducing its range of operation to lower frequencies. The tiny junction and stray capacitances between the base and collector terminals of a Darlington transistor, for example, may be drastically increased by the Miller effect due to its high gain, lowering the high-frequency response of the device.

It is also important to note that the Miller capacitance is the capacitance seen looking into the input. When looking for all of the RC time constants (poles), it is important to include as well the capacitance seen by the output. The capacitance on the output is often neglected, since it sees C(1 + 1/A_v) and amplifier outputs are typically low impedance. However, if the amplifier has a high-impedance output, such as when a gain stage is also the output stage, then this RC can have a significant impact on the performance of the amplifier. This is when pole-splitting techniques are used.

The Miller effect may also be exploited to synthesize larger capacitors from smaller ones. One such example is in the stabilization of feedback amplifiers, where the required capacitance may be too large to practically include in the circuit. This may be particularly important in the design of integrated circuits, where capacitors can consume significant area, increasing costs.

Mitigation

The Miller effect may be undesired in many cases, and approaches may be sought to lower its impact. Several such techniques are used in the design of amplifiers.

A current buffer stage may be added at the output to lower the gain A_v between the input and output terminals of the amplifier (though not necessarily the overall gain). For example, a common base may be used as a current buffer at the output of a common emitter stage, forming a cascode. This will typically reduce the Miller effect and increase the bandwidth of the amplifier.

Alternatively, a voltage buffer may be used before the amplifier input, reducing the effective source impedance seen by the input terminals. This lowers the RC time constant of the circuit and typically increases the bandwidth.

Impact on frequency response

Figure 2 shows an example of Figure 1 where the impedance coupling the input to the output is the coupling capacitor C_C. A Thévenin voltage source V_A drives the circuit with Thévenin resistance R_A. At the output, a parallel RC circuit serves as load. (The load is irrelevant to this discussion: it just provides a path for the current to leave the circuit.) In Figure 2, the coupling capacitor delivers a current jωC_C(v_i - v_o) to the output circuit.

Figure 3 shows a circuit electrically identical to Figure 2 using Miller's theorem. The coupling capacitor is replaced on the input side of the circuit by the Miller capacitance C_M, which draws the same current from the driver as the coupling capacitor in Figure 2. Therefore, the driver sees exactly the same loading in both circuits. On the output side, a dependent current source in Figure 3 delivers the same current to the output as does the coupling capacitor in Figure 2. That is, the RC load sees the same current in Figure 3 that it does in Figure 2.

In order that the Miller capacitance draw the same current in Figure 3 as the coupling capacitor in Figure 2, the Miller transformation is used to relate C_M to C_C. In this example, this transformation is equivalent to setting the currents equal, that is

j\omega C_C (v_i - v_o) = j\omega C_M v_i,

or, rearranging this equation

C_M = C_C \left( 1 - \frac{v_o}{v_i} \right) = C_C (1 + A_v).

This result is the same as the C_M of the Derivation section.

The present example, with A_v frequency independent, shows the implications of the Miller effect, and therefore of C_C, for the frequency response of this circuit, and is typical of the impact of the Miller effect (see, for example, common source). If C_C = 0 F, the output voltage of the circuit is simply -A_v v_A, independent of frequency. However, when C_C is not zero, Figure 3 shows that the large Miller capacitance appears at the input of the circuit. The voltage output of the circuit then becomes

v_o = -A_v v_i = \frac{-A_v v_A}{1 + j\omega C_M R_A}.

The output thus rolls off above the corner frequency \omega_{3dB} = 1/(C_M R_A): the Miller-multiplied capacitance C_M = C_C(1 + A_v) lowers the bandwidth of the circuit by the factor (1 + A_v) compared with the same circuit containing C_C alone.
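The bandwidth penalty can be made concrete with illustrative numbers. In the following sketch (all component values invented), the input pole sits at f = 1/(2π R_A C), and Miller multiplication moves it down by the factor (1 + A_v):

# Illustrative numbers for the Miller bandwidth penalty: the input
# pole is at f = 1/(2*pi*R_A*C); Miller multiplication lowers it by
# a factor of (1 + A_v).  All values below are made up.
import math

R_A = 10e3           # 10 kohm Thevenin source resistance
C_C = 2e-12          # 2 pF coupling capacitance
A_v = 50.0           # amplifier gain magnitude

C_M = C_C * (1 + A_v)
f_no_miller = 1 / (2 * math.pi * R_A * C_C)
f_miller    = 1 / (2 * math.pi * R_A * C_M)

print(f"corner without Miller effect: {f_no_miller/1e6:.1f} MHz")  # ~8.0 MHz
print(f"corner with Miller effect:    {f_miller/1e6:.2f} MHz")     # ~0.16 MHz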

Sound effect

For the album by The Jam, see Sound Affects.

Sound effects or audio effects are artificially created or enhanced sounds, or sound processes used to emphasize artistic or other content of films, television shows, live performance, animation, video games, music, or other media. In motion picture and television production, a sound effect is a sound recorded and presented to make a specific storytelling or creative point without the use of dialogue or music. The term often refers to a process applied to a recording, without necessarily referring to the recording itself. In professional motion picture and television production, dialogue, music, and sound effects recordings are treated as separate elements. Dialogue and music recordings are never referred to as sound effects, even though the processes applied to them, such as reverberation or flanging effects, often are called "sound effects".

Film

In the context of motion pictures and television, sound effects refers to an entire hierarchy of sound elements, whose production encompasses many different disciplines, including:

  • Hard sound effects are common sounds that appear on screen, such as door slams, weapons firing, and cars driving by.
  • Background (or BG) sound effects are sounds that do not explicitly synchronize with the picture, but indicate setting to the audience, such as forest sounds, the buzzing of fluorescent lights, and car interiors. The sound of people talking in the background is also considered a "BG," but only if the speaker is unintelligible and the language is unrecognizable (this is known as walla). These background noises are also called ambience or atmos ("atmosphere").
  • Foley sound effects are sounds that synchronize on screen, and require the expertise of a Foley artist to record properly. Footsteps, the movement of hand props (e.g., a tea cup and saucer), and the rustling of cloth are common foley units.
  • Design sound effects are sounds that do not normally occur in nature, or are impossible to record in nature. These sounds are used to suggest futuristic technology in a science fiction film, or are used in a musical fashion to create an emotional mood.

Each of these sound effect categories is specialized, with sound editors known as specialists in an area of sound effects (e.g. a "Car cutter" or "Guns cutter").

The process can be separated into two steps: the recording of the effects, and the processing. Sound effects are often custom recorded for each project, but to save time and money a recording may be taken from a library of stock sound effects (such as the famous Wilhelm scream). A sound effect library might contain every effect a producer requires, yet the timing and aesthetics of a tailor-made sound are often preferred.

Foley is another method of adding sound effects. Foley is more of a technique for creating sound effects than a type of sound effect, but it is often used for creating the incidental real-world sounds that are very specific to what is going on onscreen, such as footsteps. With this technique, the action onscreen is essentially recreated to match it as closely as possible. If done correctly it is very hard for audiences to tell what sounds were added and what sounds were originally recorded (location sound).

In the early days of film and radio, Foley artists would add sounds in realtime or pre-recorded sound effects would be played back from analogue discs in realtime (while watching the picture). Today, with effects held in digital format, it is easy to create any required sequence to be played in any desired timeline.

Video games

The principles involved with modern video game sound effects (since the introduction of sample playback) are essentially the same as those of motion pictures. Typically a game project requires two jobs to be completed: sounds must be recorded or selected from a library and a sound engine must be programmed so that those sounds can be incorporated into the game's interactive environment.

In earlier computers and video game systems, sound effects were typically produced using sound synthesis. In modern systems, the increases in storage capacity and playback quality have allowed sampled sound to be used. Modern systems also frequently utilize positional audio, often with hardware acceleration, and real-time audio post-processing, which can also be tied to the 3D graphics development. Based on the internal state of the game, multiple different calculations can be made, allowing for, for example, realistic sound dampening, echoes, and the Doppler effect.
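As one small example of such a game-state-driven calculation, the standard non-relativistic Doppler formula reduces to a per-frame pitch factor; the velocities below are purely illustrative:

# Standard non-relativistic Doppler pitch factor, as a game audio
# engine might apply it per frame; the speeds here are illustrative.
SPEED_OF_SOUND = 343.0  # m/s in air

def doppler_pitch(v_listener, v_source):
    """Velocities are signed components along the source-to-listener
    line: positive means moving toward each other."""
    return (SPEED_OF_SOUND + v_listener) / (SPEED_OF_SOUND - v_source)

# A source approaching a stationary listener at 30 m/s raises pitch ~10%:
print(doppler_pitch(0.0, 30.0))   # ~1.096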

Historically the simplicity of game environments reduced the required number of sounds needed, and thus only one or two people were directly responsible for the sound recording and design. As the video game business has grown and computer sound reproduction quality has increased, however, the team of sound designers dedicated to game projects has likewise grown and the demands placed on them may now approach those of mid-budget motion pictures.

Music

Some pieces of music use sound effects that are made by a musical instrument or by other means. An early example is the 18th-century Toy Symphony. Richard Wagner, in the opera Das Rheingold (1869), lets a choir of anvils introduce the scene of the dwarfs who have to work in the mines, similar to the introduction of the dwarfs in the 1937 Disney movie Snow White. Klaus Doldinger's soundtrack for the 1981 movie Das Boot includes a title score with a sonar sound to reflect the U-boat setting.

Recording

The most realistic sound effects originate from original sources; the closest sound to machine-gun fire that we can replay is an original recording of actual machine guns. Less realistic sound effects are digitally synthesized.

Aid effectiveness

Aid effectiveness is the effectiveness of development aid in achieving economic or human development (or development targets). Aid agencies are always looking for new ways to improve aid effectiveness, including conditionality, capacity building and support for improved governance.

Historical background

The international aid system was born out of the ruins of the Second World War, when the United States used its aid funds to help rebuild Europe. The system came of age during the Cold War era, from the 1960s to the 1980s. During this time, foreign aid was often used to support client states in the developing world: even though funds were generally better utilised in countries that were well governed, they were instead directed toward allies. After the end of the Cold War, the declared focus of official aid began to move further towards the alleviation of poverty and the promotion of development, and the countries in greatest need and poverty became more of a priority. It is against this background that the international aid effectiveness movement began taking shape in the late 1990s. Donor governments and aid agencies began to realise that their many different approaches and requirements were imposing huge costs on developing countries and making aid less effective. They began working with each other, and with developing countries, to harmonise their work in order to improve its impact.

The aid effectiveness movement picked up steam in 2002 at the International Conference on Financing for Development (http://www.un.org/esa/ffd/ffdconf) in Monterrey, Mexico, which established the Monterrey Consensus. There, the international community agreed to increase its funding for development—but acknowledged that more money alone was not enough. Donors and developing countries alike wanted to know that aid would be used as effectively as possible. They wanted it to play its optimum role in helping poor countries achieve the Millennium Development Goals (http://www.un.org/millenniumgoals/bkgd.shtml), the set of targets agreed by 192 countries in 2000 which aimed to halve world poverty by 2015. A new paradigm of aid as a partnership, rather than a one-way relationship between donor and recipient, was evolving.

In 2003, aid officials and representatives of donor and recipient countries gathered in Rome for the High Level Forum on Harmonization (http://www.aidharmonization.org). At this meeting, convened by the Organisation for Economic Co-operation and Development (OECD; http://www.oecd.org), donor agencies committed to work with developing countries to better coordinate and streamline their activities at country level. They agreed to take stock of concrete progress before meeting again in Paris in early 2005. In Paris, countries from around the world endorsed the Paris Declaration on Aid Effectiveness, a more comprehensive attempt to change the way donor and developing countries do business together, based on principles of partnership. Three years on, in 2008, the Third High Level Forum (http://www.accrahlf.net) in Accra, Ghana took stock of progress and built on the Paris Declaration to accelerate the pace of change. The principles agreed upon in the declarations are, however, still not always practiced by donors and multilateral bodies; in the case of Cambodia, two experts have assessed donor misbehaviour.

Critiques of the impact of aid have become more vociferous as the global campaigns to increase aid have gained momentum, particularly since 2000. There are those who argue that aid is never effective. Most aid practitioners agree that aid has not always worked to its maximum potential, but that it has achieved significant impact when it has been properly directed and managed, particularly in areas such as health and basic education. There is broad agreement that aid is only one factor in the complex process needed for poor countries to develop, and that economic growth and good governance are prerequisites. The OECD has explored—through peer reviews and other work by the Development Assistance Committee (DAC)—the reasons why aid has and has not worked in the past. This has resulted in a body of best practices and principles that can be applied globally to make aid work better. The ultimate aim of aid effectiveness efforts today is to help developing countries build well functioning local structures and systems so that they are able to manage their own development and reduce their dependency on aid.

Related research on aid effectiveness

Micro-Macro Paradox

The major findings by Paul Mosley and others conclude that it is impossible to establish any significant correlation between aid and the growth rate of GNP in developing countries. One reason for this is the fungibility of aid and its leakage into unproductive expenditure in the public sector.

However, at a micro level, all donor agencies regularly report the success of most of their projects and programs. This contrast is known as the micro-macro paradox.

Mosley’s result was further confirmed by Peter Boone, who argued that aid is ineffective because it tends to finance consumption rather than investment. Boone also affirmed the micro-macro paradox.

One challenge for assessing the effectiveness of aid is that not all aid is intended to generate economic growth. Some aid is intended for humanitarian purposes, and some may simply improve the standard of living of people in developing countries.

The micro-macro paradox has also been attributed to inadequate assessment practices. For example, conventional assessment techniques often over-emphasize inputs and outputs without taking sufficient account of societal impacts. The shortcomings of prevalent assessment practices have led to a gradual international trend towards more rigorous methods of impact assessment.

Research by Burnside and Dollar (2000)

Burnside and Dollar provide empirical evidence that the impact of aid on GDP growth is positive and significant in developing countries with "sound" institutions and economic policies (i.e. open trade, fiscal and monetary discipline); but aid has less or no significant impact in countries with "poor" institutions and policies. As economists at the World Bank, Burnside and Dollar advocated selectivity in aid allocation. They argue that aid should be systematically allocated to countries conditional on "good" policy.

Burnside and Dollar’s findings have been placed under heavy scrutiny since their publication. Easterly and others re-estimated the Burnside and Dollar model with an updated and extended dataset, but could not find any significant aid-policy interaction term. New evidence seems to suggest that Burnside and Dollar’s results are not statistically robust.
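The contested quantity is the coefficient on the aid-policy interaction term in a growth regression of the (stripped-down) form growth = β0 + β1·aid + β2·policy + β3·(aid × policy) + ε. The following sketch fits that specification to synthetic data invented purely for illustration; Burnside and Dollar's claim amounts to β3 > 0 in real data, which the re-estimations discussed above failed to confirm:

# Growth regression with an aid-policy interaction term, in the
# spirit of Burnside and Dollar (2000).  All data here are SYNTHETIC,
# generated only to show the specification, not to replicate results.
import numpy as np

rng = np.random.default_rng(0)
n = 200
aid    = rng.uniform(0, 10, n)        # aid as % of GDP (invented)
policy = rng.normal(0, 1, n)          # composite policy index (invented)
growth = (1.0 + 0.05 * aid + 0.8 * policy
          + 0.15 * aid * policy       # true beta3 = 0.15 by construction
          + rng.normal(0, 1, n))

X = np.column_stack([np.ones(n), aid, policy, aid * policy])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
print("beta0..beta3 =", np.round(beta, 3))
# A positive beta3 means aid helps more under "good" policy -- the
# claim whose statistical robustness Easterly and others dispute.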

Studies and Literature on Aid Effectiveness

One problem with the studies on aid is the lack of differentiation between the different types of aid. Some types, such as short-term humanitarian aid, have no direct impact on economic growth, while aid used for infrastructure and investment can produce positive economic growth.

The emerging story from the aid-growth literature is that aid is effective under a wide variety of circumstances and that nonlinearities in the aid-growth relationship are important.


From Yahoo Answers

Question: What is an everyday example of the photoelectric effect?

Answers: Digital cameras. A photon hits the sensor, liberating an electron, which is then measured by the camera.

Question: pharmacology

Answers: If you're talking about drug interactions, here are some definitions. An antagonistic drug effect is when 2 drugs negate each other. An example would be a drug that causes high blood pressure, such as a stimulant, and a drug that lowers blood pressure, such as nitroglycerin. They would be considered antagonistic with regard to blood pressure. This may be bad, since you may not get the desired effect when you take the 2 drugs together. An additive drug effect is when the effects of 2 drugs add up together. This can cause toxicity. An example would be taking 2 drugs that raise potassium levels, which can be fatal. You can find a more complete definition for the 2 drug effects at: http://www.hanstenandhorn.com/article-d-i.html And finally, a synergistic drug effect is when the effects of 2 drugs add up to more than the expected sum. Wikipedia gives an example of "Codeine mixed with Paracetamol to enhance the action of codeine as a pain reliever", along with several other examples. http://en.wikipedia.org/wiki/Synergy Feel free to e-mail me if you need any additional help.

Question: I need to do a project for Geography on the effect of weathering on any monument. We can do effects of chemical weathering/mechanical weathering/biological weathering/any other type of weathering (???) Are there any good examples of clear and apparent weathering on any important monuments? I think that one of the wonders of the world would be a good pick, like the Taj Mahal (chemical weathering - acid rain) or the Pyramids (mechanical weathering), but any monument will do. Thanks in advance. :)

Answers: Mechanical weathering, particularly through rain and freeze-thaw cycles, has shattered Mt. Rushmore. Seriously, most of the features such as noses and eyebrows are being held on with giant bolts and massive amounts of caulk. You just can't see them from the tourist photo spot. This rock was shattered to begin with, which is why Jefferson is tucked so far back behind Washington -- there was a huge crack that they had to get away from. So... freeze-thaw, running water, raindrop impact, all mechanical, very little chemical. St. Paul's Cathedral and the Cathedral at Notre Dame have been severely affected by chemical weathering in the form of acid rain. Their detailed features like gargoyles and filigree look as if they're just melting away, and St. Paul's even has an entire wing that's simply dissolving and falling down. So... acid rain dissolution of primarily marble and granite, by sulfuric, carbonic, nitric, and hydrochloric acids, all chemical, with some mechanical.

Question: Hi! I have a 3 x 3 ANOVA design for a study and I want to get some clarification. Specifically: Factor A = Site of Location; 3 levels (Area Neutral, Area A, Area B). Factor B = Duration of Time; 3 levels (50s, 100s, 150s). DV = Score. Now, I haven't learned how to do simple effects analysis in SPSS, but is it plausible to just use a 1-way ANOVA + post hoc tests to accomplish simple effects analysis? Let me explain. For example, I want to examine the simple effects to clarify the results for the Site of Location factor. Could I run a 1-way ANOVA using just the 'Site of Location' factor as my IV, use Score as my DV, and then run a subsequent post hoc test to see which groups differ from which?

Answers: Yes you could. But you would exclude the contribution of any interactions between your two factors. If you can logically do that through a simplifying assumption, you should be OK. My question would be: if you were not after a model of variances, why bother with the ANOVA at all? If you are looking for an optimal point and your factors are not discrete choices, you should consider a response surface method, which mathematically always marches "up" from the last point using slope methods. The risk being that the local optimum you find is not the TRUE optimum. But if you are comparing discrete settings and you want to know what is best, well, a t-test might be more in order, or a B vs C comparison. ANOVA is for looking at the contribution of factors to variances.
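For concreteness, the follow-up analysis the question describes might look like the following SciPy sketch, run on made-up scores for the three sites; pairwise t-tests with a Bonferroni correction stand in here for a dedicated post hoc test:

# One-way ANOVA on Site of Location plus a simple post hoc pass, as
# described in the answer above.  The scores are made-up numbers.
from itertools import combinations
from scipy import stats

scores = {
    "Neutral": [12, 15, 14, 10, 13],
    "Area A":  [18, 21, 19, 22, 20],
    "Area B":  [11, 9, 13, 12, 10],
}

F, p = stats.f_oneway(*scores.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

# Post hoc: pairwise t-tests, Bonferroni-corrected for 3 comparisons.
pairs = list(combinations(scores, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(scores[a], scores[b])
    print(f"{a} vs {b}: corrected p = {min(p * len(pairs), 1.0):.4f}")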

From Youtube

Effective Listening: Effective Listening Example

Physics: Doppler Effect: Watch more free lectures and examples of Physics at www.educator.com. Other subjects include Algebra, Trigonometry, Calculus, Biology, Chemistry, Statistics, and Computer Science. All lectures are broken down by individual topics - no more wasted time - just search and jump directly to the answer.