difference between theory x and theory y
From Wikipedia
Theory X and Theory Y are theories of human motivation created and developed by Douglas McGregor at the MIT Sloan School of Management in the 1960s that have been used in human resource management, organizational behavior, organizational communication and organizational development. They describe two very different attitudes toward workforce motivation. McGregor felt that companies followed either one or the other approach. He also thought that the key to connecting self-actualization with work is determined by the managerial trust of subordinates.
Theory X
In this theory, which has been proven counter-effective in most modern practice, management assumes employees are inherently lazy, will avoid work if they can, and inherently dislike work. As a result, management believes that workers need to be closely supervised and that comprehensive systems of controls must be developed. A hierarchical structure is needed, with a narrow span of control at each level. According to this theory, employees will show little ambition without an enticing incentive program and will avoid responsibility whenever they can. According to Michael J. Papa (Ph.D., Temple University; M.A., Central Michigan University; B.A., St. John's University), if organizational goals are to be met, Theory X managers rely heavily on threat and coercion to gain their employees' compliance. Beliefs of this theory lead to mistrust, highly restrictive supervision, and a punitive atmosphere. The Theory X manager tends to believe that everything must end in blaming someone. He or she thinks all prospective employees are only out for themselves. Usually these managers feel the sole purpose of the employee's interest in the job is money. They will blame the person first in most situations, without questioning whether it may be the system, policy, or lack of training that deserves the blame. A Theory X manager believes that his or her employees do not really want to work, that they would rather avoid responsibility, and that it is the manager's job to structure the work and energize the employee. One major flaw of this management style is that it is much more likely to cause diseconomies of scale in large businesses.
Theory Y
In this theory, management assumes employees may be ambitious and self-motivated and may exercise self-control. It is believed that employees enjoy their mental and physical work duties. According to Papa, to them work is as natural as play. They possess the ability for creative problem solving, but their talents are underused in most organizations. Given the proper conditions, Theory Y managers believe that employees will learn to seek out and accept responsibility and to exercise self-control and self-direction in accomplishing objectives to which they are committed. A Theory Y manager believes that, given the right conditions, most people will want to do well at work. They believe that the satisfaction of doing a good job is a strong motivation. Many people interpret Theory Y as a positive set of beliefs about workers. A close reading of The Human Side of Enterprise reveals that McGregor simply argues for managers to be open to a more positive view of workers and the possibilities that this creates. He thinks that Theory Y managers are more likely than Theory X managers to develop the climate of trust with employees that is required for human resource development, which is a crucial aspect of any organization. This would include managers communicating openly with subordinates, minimizing the difference between superior-subordinate relationships, and creating a comfortable environment in which subordinates can develop and use their abilities. This climate would include the sharing of decision making, so that subordinates have a say in decisions that influence them. Because this theory takes a positive view of employees, the employer is under much less pressure than someone influenced by a Theory X management style.
Theory X and Theory Y combined
For McGregor, Theory X and Y are not different ends of the same continuum. Rather they are two different continua in themselves. Thus, if managers need to apply Theory Y principles, that does not preclude them from being a part of Theory X & Y.
McGregor and Maslow's hierarchy
McGregor's work was based on Maslow's hierarchy of needs. He grouped Maslow's hierarchy into "lower order" (Theory X) needs and "higher order" (Theory Y) needs. He suggested that management could use either set of needs to motivate employees. As management theorists became familiar with Maslow's work, they soon realized the possibility of connecting higher level needs to worker motivation. If organizational goals and individual needs could be integrated so that people would acquire self-esteem and, ultimately, self-actualization through work, then motivation would be self-sustaining. Today, his Theory Y principle influences the design of personnel policies, affects the way companies conduct performance reviews, and shapes the idea of pay for performance. According to the Douglas McGregor: Theory X and Theory Y article, "He is the reason we use the term 'human resources' instead of personnel department" says Brzezinski. "The idea that people are assets was unheard of before McGregor."
Criticisms
Today the theories are seldom used explicitly, largely because the insights they provided have influenced, and been incorporated into, the work of later generations of management theorists and practitioners. More commonly, workplaces are described as "hard" versus "soft." Taken too literally, any such dichotomy, including Theory X and Y, seems to represent unrealistic extremes. Most employees (and managers) fall somewhere in between these poles. Naturally, McGregor was well aware of the heuristic, as opposed to literal, way in which such distinctions are useful. Theory X and Theory Y are still important terms in the field of management and motivation. Recent studies have questioned the rigidity of the model, but McGregor's X-Y Theory remains a guiding principle of positive approaches to management, to organizational development, and to improving organizational culture.
General
BET theory aims to explain the physical adsorption of gas molecules on a solid surface and serves as the basis for an important analysis technique for the measurement of the specific surface area of a material. In 1938, Stephen Brunauer, Paul Hugh Emmett, and Edward Teller published the first journal article about the BET theory; "BET" consists of the first initials of their family names.
The concept of the theory is an extension of the Langmuir theory, which is a theory for monolayer molecular adsorption, to multilayer adsorption with the following hypotheses: (a) gas molecules physically adsorb on a solid in layers infinitely; (b) there is no interaction between each adsorption layer; and (c) the Langmuir theory can be applied to each layer. The resulting BET equation is expressed by (1):
\frac{1}{v \left [ \left ( {P_0}/{P} \right ) - 1 \right ]} = \frac{c-1}{v_m c} \left ( \frac{P}{P_0} \right ) + \frac{1}{v_m c} \ \ \ \ \ \ \ (1)
P and P_0 are the equilibrium and the saturation pressure of adsorbates at the temperature of adsorption, v is the adsorbed gas quantity (for example, in volume units), and v_m is the monolayer adsorbed gas quantity. c is the BET constant, which is expressed by (2):
c = \exp\left(\frac{E_1 - E_L}{RT}\right) \ \ \ \ \ \ \ (2)
E_1 is the heat of adsorption for the first layer, and E_L is that for the second and higher layers and is equal to the heat of liquefaction.
Equation (1) is an adsorption isotherm and can be plotted as a straight line with {1}/{v [ ({P_0}/{P}) - 1 ]} on the y-axis and \phi={P}/{P_0} on the x-axis, according to experimental results. This plot is called a BET plot. The linear relationship of this equation is maintained only in the range 0.05 < {P}/{P_0} < 0.35. The slope A and the y-intercept I of the line are used to calculate the monolayer adsorbed gas quantity v_m and the BET constant c. The following equations can be used:
v_m = \frac{1}{A+I}\ \ \ \ \ \ \ (3) c = 1+\frac{A}{I}\ \ \ \ \ \ \ (4)
The BET method is widely used in surface science for the calculation of surface areas of solids by physical adsorption of gas molecules. A total surface area S_{total} and a specific surface area S are evaluated by the following equations:
S_{total} = \frac{v_m N s}{V} \ \ \ \ \ \ \ (5) \qquad S_{BET} = \frac{S_{total}}{a} \ \ \ \ \ \ \ (6)
where N is Avogadro's number, s is the adsorption cross section of the adsorbing species, V is the molar volume of the adsorbate gas (v_m is expressed in the same volume units), and a is the mass of the adsorbent sample.
Example
Cement paste
By application of the BET theory it is possible to determine the inner surface of hardened cement paste. If the quantity of adsorbed water vapor is measured at different levels of relative humidity, a BET plot is obtained. From the slope A and the y-intercept I of the plot it is possible to calculate v_m and the BET constant c. For cement paste hardened in water (T = 97 °C), the slope of the line is A = 24.20 and the y-intercept is I = 0.33; from this it follows that v_m = \frac{1}{A+I} = 0.0408 g/g and c = 1+\frac{A}{I} = 73.6. From this the specific BET surface area S_{BET} can be calculated by use of the above-mentioned equations (one water molecule covers s = 0.114 nm^2). It follows that S_{BET} = 156 m^2/g, which means that hardened cement paste has an inner surface of 156 square meters per gram of cement.
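The arithmetic of this example can be reproduced directly from equations (3)-(6). A minimal sketch in Python (the molar mass of water and Avogadro's number are standard values, not taken from the source; small rounding differences against the quoted c = 73.6 are expected):

```python
# BET evaluation for water-vapor adsorption on hardened cement paste,
# using the slope A and y-intercept I of the BET plot quoted in the text.
N_A = 6.022e23        # Avogadro's number, 1/mol
M_WATER = 18.0        # molar mass of water, g/mol
S_H2O = 0.114e-18     # area covered by one water molecule, m^2

A, I = 24.20, 0.33    # slope and y-intercept of the BET plot

v_m = 1 / (A + I)     # monolayer capacity, g water per g paste   (eq. 3)
c = 1 + A / I         # BET constant                              (eq. 4)

# Specific surface: molecules in the monolayer per gram, times area each.
S_BET = v_m / M_WATER * N_A * S_H2O   # m^2 per g of cement paste

print(round(v_m, 4))  # 0.0408
print(round(S_BET))   # 155 (the text, with its own rounding, reports 156)
```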
Activated Carbon
For example, activated carbon, a strong adsorbent that usually has an adsorption cross section s of 0.16 nm^{2} for nitrogen adsorption at liquid-nitrogen temperature, is revealed from experimental data to have a large surface area of around 3000 m^2 g^{-1}. Moreover, in the field of solid catalysis, the surface area of catalysts is an important factor in catalytic activity. Porous inorganic materials such as mesoporous silica and layered clay minerals have high surface areas of several hundred m^2 g^{-1}, as calculated by the BET method, indicating the possibility of application as efficient catalytic materials.
In mathematics and computer science, graph theory is the study of graphs: mathematical structures used to model pairwise relations between objects from a certain collection. A "graph" in this context refers to a collection of vertices or 'nodes' and a collection of edges that connect pairs of vertices. A graph may be undirected, meaning that there is no distinction between the two vertices associated with each edge, or its edges may be directed from one vertex to another; see graph (mathematics) for more detailed definitions and for other variations in the types of graphs that are commonly considered. The graphs studied in graph theory should not be confused with "graphs of functions" and other kinds of graphs.
Graphs are one of the prime objects of study in Discrete Mathematics. Refer to Glossary of graph theory for basic definitions in graph theory.
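The definitions above can be made concrete with a minimal adjacency-list representation. This is an illustrative sketch only, not tied to any particular library; the vertex labels stand for the four land masses of the Königsberg bridge problem discussed below, reduced to a simple graph:

```python
from collections import defaultdict

def build_graph(edges):
    """Build an undirected graph as an adjacency list: each vertex
    maps to the set of vertices it shares an edge with."""
    graph = defaultdict(set)
    for u, v in edges:
        graph[u].add(v)   # undirected: store the edge in both directions
        graph[v].add(u)
    return graph

# The seven bridges of Konigsberg actually form a multigraph (parallel
# edges); here only the underlying simple graph is kept for illustration.
g = build_graph([("A", "B"), ("A", "C"), ("A", "D"), ("B", "D"), ("C", "D")])
print(sorted(g["A"]))  # ['B', 'C', 'D']
```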
History
The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory. This paper, as well as the one written by Vandermonde on the knight problem, carried on with the analysis situs initiated by Leibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy and L'Huillier, and is at the origin of topology.
More than one century after Euler's paper on the bridges of Königsberg, and while Listing was introducing topology, Cayley was led by the study of particular analytical forms arising from differential calculus to study a particular class of graphs, the trees. This study had many implications in theoretical chemistry. The techniques involved mainly concerned the enumeration of graphs having particular properties. Enumerative graph theory then arose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937, and their generalization by De Bruijn in 1959. Cayley linked his results on trees with the contemporary studies of chemical composition. The fusion of ideas coming from mathematics with those coming from chemistry is at the origin of a part of the standard terminology of graph theory.
In particular, the term "graph" was introduced by Sylvester in a paper published in 1878 in Nature, where he draws an analogy between "quantic invariants" and "covariants" of algebra and molecular diagrams:
 "[...] Every invariant and covariant thus becomes expressible by a graph precisely identical with a Kekuléan diagram or chemicograph. [...] I give a rule for the geometrical multiplication of graphs, i.e. for constructing a graph to the product of in- or co-variants whose separate graphs are given. [...]" (italics as in the original).
One of the most famous and productive problems of graph theory is the four color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed by Francis Guthrie in 1852 and its first written record is in a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger led to the study of the colorings of graphs embedded on surfaces with arbitrary genus. Tait's reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen and Kőnig. The works of Ramsey on colorations, and more especially the results obtained by Turán in 1941, were at the origin of another branch of graph theory, extremal graph theory.
The four color problem remained unsolved for more than a century. In 1969 Heinrich Heesch published a method for solving the problem using computers. A computer-aided proof produced in 1976 by Kenneth Appel and Wolfgang Haken makes fundamental use of the notion of "discharging" developed by Heesch. The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof, still relying on computer checking, was published in 1997 by Robertson, Sanders, Seymour, and Thomas.
Valence shell electron pair repulsion (VSEPR) theory is a model in chemistry used to predict the shape of individual molecules based upon the extent of electron-pair electrostatic repulsion. It is also named Gillespie-Nyholm theory after its two main developers. The acronym "VSEPR" is sometimes pronounced "vesper" for ease of pronunciation; however, the phonetic pronunciation is technically more correct.
The premise of VSEPR is that the valence electron pairs surrounding an atom mutually repel each other, and will therefore adopt an arrangement that minimizes this repulsion, thus determining the molecular geometry. The number of electron pairs surrounding an atom, both bonding and nonbonding, is called its steric number.
VSEPR theory is usually compared and contrasted with valence bond theory, which addresses molecular shape through orbitals that are energetically accessible for bonding. Valence bond theory concerns itself with the formation of sigma and pi bonds. Molecular orbital theory is another model for understanding how atoms and electrons are assembled into molecules and polyatomic ions.
VSEPR theory has long been criticized for not being quantitative, and therefore limited to the generation of "crude", even though structurally accurate, molecular geometries of covalent molecules. However, molecular mechanics force fields based on VSEPR have also been developed.
History
The idea of a correlation between molecular geometry and number of valence electrons (both shared and unshared) was first presented in a Bakerian Lecture in 1940 by Nevil Sidgwick and Herbert Powell at the University of Oxford. In 1957 Ronald Gillespie and Ronald Sydney Nyholm at University College London refined this concept to build a more detailed theory capable of choosing between various alternative geometries.
Description
VSEPR theory mainly involves predicting the layout of electron pairs surrounding one or more central atoms in a molecule, which are bonded to two or more other atoms. The geometry of these central atoms in turn determines the geometry of the larger whole.
The number of electron pairs in the valence shell of a central atom is determined by drawing the Lewis structure of the molecule, expanded to show all lone pairs of electrons, alongside protruding and projecting bonds. Where two or more resonance structures can depict a molecule, the VSEPR model is applicable to any such structure. For the purposes of VSEPR theory, the multiple electron pairs in a multiple bond are treated as though they were a single "pair".
These electron pairs are assumed to lie on the surface of a sphere centered on the central atom and, since they are negatively charged, tend to occupy positions that minimize their mutual electrostatic repulsion by maximizing the distance between them. The number of electron pairs therefore determines the overall geometry that they will adopt.
For example, when there are two electron pairs surrounding the central atom, their mutual repulsion is minimal when they lie at opposite poles of the sphere. Therefore, the central atom is predicted to adopt a linear geometry. If there are 3 electron pairs surrounding the central atom, their repulsion is minimized by placing them at the vertices of a triangle centered on the atom. Therefore, the predicted geometry is trigonal. Similarly, for 4 electron pairs, the optimal arrangement is tetrahedral.
This overall geometry is further refined by distinguishing between bonding and nonbonding electron pairs. A bonding electron pair is involved in a sigma bond with an adjacent atom, and, being shared with that other atom, lies farther away from the central atom than does a nonbonding pair (lone pair), which is held close to the central atom by its positively charged nucleus. Therefore, the repulsion caused by the lone pair is greater than the repulsion caused by the bonding pair. As such, when the overall geometry has two sets of positions that experience different degrees of repulsion, the lone pair(s) will tend to occupy the positions that experience less repulsion. In other words, the lone pair-lone pair (lp-lp) repulsion is considered to be stronger than the lone pair-bonding pair (lp-bp) repulsion, which in turn is stronger than the bonding pair-bonding pair (bp-bp) repulsion. Hence, the weaker bp-bp repulsion is preferred over the lp-lp or lp-bp repulsion.
This distinction becomes important when the overall geometry has two or more nonequivalent positions. For example, when there are 5 electron pairs surrounding the central atom, the optimal arrangement is a trigonal bipyramid. In this geometry, two positions lie at 180° angles to each other and at 90° angles to the other 3 adjacent positions, whereas the other 3 positions lie at 120° to each other and at 90° to the first two positions. The first two positions therefore experience more repulsion than the last three positions. Hence, when there are one or more lone pairs, the lone pairs will tend to occupy the last three positions first.
The difference between lone pairs and bonding pairs may also be used to rationalize deviations from idealized geometries. For example, the H_{2}O molecule has four electron pairs in its valence shell: two lone pairs and two bond pairs. The four electron pairs are spread so as to point roughly towards the apices of a tetrahedron. However, the bond angle between the two O-H bonds is only 104.5°, rather than the 109.5° of a regular tetrahedron, because the two lone pairs (whose density or probability envelopes lie closer to the oxygen nucleus) exert a greater mutual repulsion than the two bond pairs.
AXE Method
The "AXE method" of electron counting is commonly used when applying the VSEPR theory. The A represents the central atom and always has an implied subscript one. The X represents the number of sigma bonds between the central atom and outside atoms. Multiple covalent bonds (double, triple, etc.) count as one X. The E represents the number of lone electron pairs surrounding the central atom. The sum of X and E is the steric number.
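The AXE counting scheme lends itself to a simple lookup. The sketch below is illustrative (the function name and table layout are ours, not part of VSEPR theory itself); the table entries are the standard VSEPR shape predictions for each (steric number, lone pairs) combination:

```python
# Standard VSEPR shape predictions keyed by (steric number, lone pairs).
GEOMETRY = {
    (2, 0): "linear",
    (3, 0): "trigonal planar",
    (3, 1): "bent",
    (4, 0): "tetrahedral",
    (4, 1): "trigonal pyramidal",
    (4, 2): "bent",
    (5, 0): "trigonal bipyramidal",
    (5, 1): "seesaw",
    (5, 2): "T-shaped",
    (5, 3): "linear",
    (6, 0): "octahedral",
    (6, 1): "square pyramidal",
    (6, 2): "square planar",
}

def vsepr_shape(sigma_bonds: int, lone_pairs: int) -> str:
    """Predict the molecular shape from X (sigma bonds) and E (lone pairs)."""
    steric_number = sigma_bonds + lone_pairs
    return GEOMETRY[(steric_number, lone_pairs)]

# Water is AX2E2: two O-H sigma bonds plus two lone pairs.
print(vsepr_shape(2, 2))  # bent
```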
From Yahoo Answers
Answers:In which field do these 'theories' try to attract attention?
Answers:First, explicitly solve the equation for x and y: x=(80-4y)/3, y=(80-3x)/4. Let f: D -> R be defined by f(x)=(80-3x)/4. Since x and y must be natural numbers, we can set a delimiter. The domain of y must be determined. Let x=26 be the upper bound of the domain of y. The set of natural numbers is the set of positive integers; therefore, the lower bound must be 1. The highest number that x can be without causing y to become negative is 26, so the possible values of the domain are found between 1 and 26. To see this, try all possibilities: For x=1, y is not in the set of natural numbers. For x=2, y is not in the set of natural numbers. For x=3, y is not in the set of natural numbers. For x=4, y is in the set of natural numbers. For x=5, y is not in the set of natural numbers. For x=6, y is not in the set of natural numbers. For x=7, y is not in the set of natural numbers. For x=8, y is in the set of natural numbers. For x=9, y is not in the set of natural numbers. For x=10, y is not in the set of natural numbers. For x=11, y is not in the set of natural numbers. For x=12, y is in the set of natural numbers. For x=13, y is not in the set of natural numbers. For x=14, y is not in the set of natural numbers. For x=15, y is not in the set of natural numbers. For x=16, y is in the set of natural numbers. For x=17, y is not in the set of natural numbers. For x=18, y is not in the set of natural numbers. For x=19, y is not in the set of natural numbers. For x=20, y is in the set of natural numbers. For x=21, y is not in the set of natural numbers. For x=22, y is not in the set of natural numbers. For x=23, y is not in the set of natural numbers. For x=24, y is in the set of natural numbers. For x=25, y is not in the set of natural numbers. For x=26, y is not in the set of natural numbers. The domain D of f is equal to {4,8,12,16,20,24}. Similarly, the range can be determined. Let y=20 be the upper bound of the range of f.
Try all possibilities: For y=1, x is not in the set of natural numbers. For y=2, x is in the set of natural numbers. For y=3, x is not in the set of natural numbers. For y=4, x is not in the set of natural numbers. For y=5, x is in the set of natural numbers. For y=6, x is not in the set of natural numbers. For y=7, x is not in the set of natural numbers. For y=8, x is in the set of natural numbers. For y=9, x is not in the set of natural numbers. For y=10, x is not in the set of natural numbers. For y=11, x is in the set of natural numbers. For y=12, x is not in the set of natural numbers. For y=13, x is not in the set of natural numbers. For y=14, x is in the set of natural numbers. For y=15, x is not in the set of natural numbers. For y=16, x is not in the set of natural numbers. For y=17, x is in the set of natural numbers. For y=18, x is not in the set of natural numbers. For y=19, x is not in the set of natural numbers. For y=20, x is in the set of natural numbers. The range R of f is equal to {2,5,8,11,14,17}; y=20 was excluded because it corresponds to the ordered pair (0,20) and 0 is not in the set of natural numbers. Now, six ordered pairs have been determined. 2^n gives the number of subsets of any set, so 2^6=64 different ordered pairings of the six values in the domain and the range of f; that is, there are 64 different subsets of A. This can be observed by pairing each x in the domain of f with each y in the range of f via the given equation.
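The case-by-case enumeration above can be checked mechanically. A short sketch, assuming (as the rearrangements imply) that the underlying equation is 3x + 4y = 80 over the positive integers:

```python
# Enumerate positive-integer solutions of 3x + 4y = 80, reproducing
# the domain {4, 8, 12, 16, 20, 24} found by hand in the answer above.
solutions = [(x, (80 - 3 * x) // 4)
             for x in range(1, 27)
             if (80 - 3 * x) > 0 and (80 - 3 * x) % 4 == 0]

xs = [x for x, _ in solutions]
ys = [y for _, y in solutions]
print(xs)  # [4, 8, 12, 16, 20, 24]
print(ys)  # [17, 14, 11, 8, 5, 2]
```

Note that the valid y-values are {2, 5, 8, 11, 14, 17}, matching the range R derived by hand.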
Answers:I answered #1 in another thread. The core idea is that 10 is congruent to -1 modulo 11, and hence every power of 10 is congruent to 1 or -1. For #3, let the order of x be k. Then x^(k-1) = x^(-1), and also (x^(-1))^(k-1) = x. That's enough to show inclusion between the groups both ways, and hence equality. For 4a, if one of the two terms to the kth power equals 1, so does the other one. To see that, just raise the conjugate to the kth power and cancel what you can cancel. It's not clear what #2 means.
Answers:Without the constraints k, j, and m, the answer is: x = 0 ... n, y = 0 ... n-x, z = 0 ... n-x-y. You can take this discrete integral. The number of ways that y+z can add up to a number n-x is (n-x+1). So we're summing (n-x+1) as x goes from 0 to n. That adds up to the sum of 1 through n+1: S = (n+2)(n+1)/2. Now, if you put constraints on x, y, and z, this just changes the limits of this discrete integral. There's no simple way to do this, but if we assume that k+j+m >= n, and that k, j, and m are all less than n, then you get different situations if j+m >= n or j+m < n. If j+m < n, then the number of combinations of y and z will be (j+m+1). Then the number of possibilities for k will be limited to (n-(j+m) to n). So the total number of possible combinations will be the sum of 1 through (n-(j+m))... this isn't quite right. Working... More later...
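The unconstrained count in the answer above is easy to verify by brute force. A quick sketch, assuming the problem counts nonnegative integer solutions of x + y + z = n:

```python
def count_solutions(n: int) -> int:
    """Brute-force count of nonnegative (x, y, z) with x + y + z == n."""
    return sum(1
               for x in range(n + 1)
               for y in range(n + 1)
               for z in range(n + 1)
               if x + y + z == n)

# Compare with the closed form S = (n + 2)(n + 1) / 2 derived above.
for n in (3, 5, 10):
    assert count_solutions(n) == (n + 2) * (n + 1) // 2
print(count_solutions(5))  # 21
```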