Fuel prices in Bulgaria set a new record

  • 20 February 2012, 19:42


Fuel prices in Bulgaria have set a new record.

The most widely sold petrol, A95, now costs 2.57 leva per litre, while diesel has jumped to 2.70 leva, bTV reports. Fuels thereby surpassed the records set in the middle of last week, when crude oil reached 120 dollars per barrel and Lukoil Neftohim was forced to raise the prices of its products.

Data from the National Statistical Institute (NSI) show that over the past year a litre of petrol has risen by between 10 and 16 stotinki, and diesel by a full 30. Until now the records dated back to February 2008, when prices stood at 2.54 and 2.67 leva respectively. In practice, this means that relative to the price of petrol, diesel in Bulgaria is the most expensive in Europe.

The increase will affect, and is already affecting, every area of life.

The shock jump in prices hit hauliers first. Ticket prices will change as well: after rising by about 5% at the beginning of February, a further increase of around 5% is expected from 1 March.

Of the 50 trucks in the lot of one haulage company, 10 have been taken off the road for good. "Within three months, if fuel prices keep rising and the prices of our services do not, 50 percent of everything here will be gone; people will simply go bankrupt," predicts Georgi Stoichkov, owner of a transport company.

Although prices have already surpassed the previous unwelcome record from 2008, both retailers and analysts predict further price jumps.

"The price of petrol for the end customer is being kept artificially lower than it should be, while the price of diesel is market-driven, at normal levels," explained Stanimir Smilkov, commercial director of an oil company. In his words, a further increase should be expected, because the price of crude oil is currently rising.

"The uncertainty emerging in the oil-producing countries immediately pushes prices even higher. This will undermine the purchasing power of Bulgarian citizens, which is already low because of the crisis," commented economist Georgi Angelov.

The price rise has been triggered by the uncertain situation around Iran.

The country is the world's third-largest oil exporter. Yesterday Tehran halted sales of crude oil to British and French companies, and this morning oil hit an eight-month high.

According to experts, the measure is not of decisive importance, since in recent months France has sourced only three percent of the oil it needs from Iran, while the United Kingdom has stopped importing crude from the Asian country altogether.

Investors, however, are worried about other EU countries such as Italy, Spain, and Greece, which are far more dependent on Iranian oil.


Desislava Staneva


Comments (29)

Ivan

2012.02.24 | 06:18

13
The cartels are working for GERB, that's why they pretend they don't exist!
Johny Turchina

2012.02.26 | 09:14

20
Integral From Wikipedia, the free encyclopedia This article is about the concept of integrals in calculus. For the set of numbers, see integer. For other uses, see Integral (disambiguation). A definite integral of a function can be represented as the signed area of the region bounded by its graph. Topics in Calculus Fundamental theorem Limits of functions Continuity Mean value theorem [show]Differential calculus [show]Integral calculus [show]Vector calculus [show]Multivariable calculus Integration is an important concept in mathematics and, together with its inverse, differentiation, is one of the two main operations in calculus. Given a function ƒ of a real variable x and an interval [a, b] of the real line, the definite integral is defined informally to be the net signed area of the region in the xy-plane bounded by the graph of ƒ, the x-axis, and the vertical lines x = a and x = b. The term integral may also refer to the notion of antiderivative, a function F whose derivative is the given function ƒ. In this case, it is called an indefinite integral and is written: The integrals discussed in this article are termed definite integrals. The principles of integration were formulated independently by Isaac Newton and Gottfried Leibniz in the late 17th century. Through the fundamental theorem of calculus, which they independently developed, integration is connected with differentiation: if ƒ is a continuous real-valued function defined on a closed interval [a, b], then, once an antiderivative F of ƒ is known, the definite integral of ƒ over that interval is given by Integrals and derivatives became the basic tools of calculus, with numerous applications in science and engineering. The founders of the calculus thought of the integral as an infinite sum of rectangles of infinitesimal width. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. Beginning in the nineteenth century, more sophisticated notions of integrals began to appear, where the type of the function as well as the domain over which the integration is performed has been generalised. A line integral is defined for functions of two or three variables, and the interval of integration [a, b] is replaced by a certain curve connecting two points on the plane or in the space. In a surface integral, the curve is replaced by a piece of a surface in the three-dimensional space. Integrals of differential forms play a fundamental role in modern differential geometry. These generalizations of integrals first arose from the needs of physics, and they play an important role in the formulation of many physical laws, notably those of electrodynamics. There are many modern concepts of integration, among these, the most common is based on the abstract mathematical theory known as Lebesgue integration, developed by Henri Lebesgue. 
Contents [hide] 1 History 1.1 Pre-calculus integration 1.2 Newton and Leibniz 1.3 Formalizing integrals 1.4 Notation 2 Terminology and notation 3 Introduction 4 Formal definitions 4.1 Riemann integral 4.2 Lebesgue integral 4.3 Other integrals 5 Properties 5.1 Linearity 5.2 Inequalities for integrals 5.3 Conventions 6 Fundamental theorem of calculus 6.1 Statements of theorems 7 Extensions 7.1 Improper integrals 7.2 Multiple integration 7.3 Line integrals 7.4 Surface integrals 7.5 Integrals of differential forms 7.6 Summations 8 Methods 8.1 Computing integrals 8.2 Symbolic algorithms 8.3 Numerical quadrature 9 Practical applications 10 See also 11 Notes 12 References 13 External links 13.1 Online books [edit]History See also: History of calculus [edit]Pre-calculus integration Integration can be traced as far back as ancient Egypt ca. 1800 BC, with the Moscow Mathematical Papyrus demonstrating knowledge of a formula for the volume of a pyramidal frustum. The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus (ca. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of shapes for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. Similar methods were independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere.[1] The next major step in integral calculus came from the Abbasid Caliphate when the 11th century mathematician Ibn al-Haytham (known as Alhazen in Europe) devised what is now known as "Alhazen's problem", which leads to an equation of the fourth degree, in his Book of Optics. While solving this problem, he applied mathematical induction to find the formula for sums of fourth powers, by a method that can be generalized to sums of arbitrary natural powers; then he used this formula to find the volume of a paraboloid (in modern terminology, he integrated a polynomial of degree 4).[2] Some ideas of integral calculus are also found in the Siddhanta Shiromani, a 12th century astronomy text by Indian mathematician Bhāskara II.[citation needed] The next significant advances in integral calculus did not begin to appear until the 16th century. At this time the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of xn up to degree n = 9 in Cavalieri's quadrature formula. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers. At around the same time, there was also a great deal of work being done by Japanese mathematicians, particularly by Seki Kōwa.[3] He made a number of contributions, namely in methods of determining areas of figures using integrals, extending the method of exhaustion. 
[edit]Newton and Leibniz The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions within continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz. [edit]Formalizing integrals While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann integrable on a bounded interval, subsequently more general functions were considered – particularly in the context of Fourier analysis – to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system. [edit]Notation Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with or , which Newton used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted. The modern notation for the indefinite integral was introduced by Gottfried Leibniz in 1675 (Burton 1988, p. 359; Leibniz 1899, p. 154). He adapted the integral symbol, ∫, from the letter ſ (long s), standing for summa (written as ſumma; Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the French Academy around 1819–20, reprinted in his book of 1822 (Cajori 1929, pp. 249–250; Fourier 1822, §231). [edit]Terminology and notation If a function has an integral, it is said to be integrable. The function for which the integral is calculated is called the integrand. The region over which a function is being integrated is called the domain of integration. Usually this domain will be an interval, in which case it is enough to give the limits of that interval, which are called the limits of integration. If the integral does not have a domain of integration, it is considered indefinite (one with a domain is considered definite). 
In general, the integrand may be a function of more than one variable, and the domain of integration may be an area, volume, a higher dimensional region, or even an abstract space that does not have a geometric structure in any usual sense (such as a sample space in probability theory). The simplest case, the integral of a real-valued function f of one real variable x on the interval [a, b], is denoted by The ∫ sign represents integration; a and b are the lower limit and upper limit, respectively, of integration, defining the domain of integration; f is the integrand, to be evaluated as x varies over the interval [a,b]; and dx is the variable of integration. In correct mathematical typography, the dx is separated from the integrand by a space (as shown). Some authors use an upright d (that is, dx instead of dx). The variable of integration dx has different interpretations depending on the theory being used. For example, it can be seen as strictly a notation indicating that x is a dummy variable of integration, as a reflection of the weights in the Riemann sum, a measure (in Lebesgue integration and its extensions), an infinitesimal (in non-standard analysis) or as an independent mathematical quantity: a differential form. More complicated cases may vary the notation slightly. In the modern Arabic mathematical notation, which aims at pre-university levels of education in the Arab world and is written from right to left, a reflected integral symbol is used (W3C 2006). [edit]Introduction Integrals appear in many practical situations. If a swimming pool is rectangular with a flat bottom, then from its length, width, and depth we can easily determine the volume of water it can contain (to fill it), the area of its surface (to cover it), and the length of its edge (to rope it). But if it is oval with a rounded bottom, all of these quantities call for integrals. Practical approximations may suffice for such trivial examples, but precision engineering (of any discipline) requires exact and rigorous values for these elements. Approximations to integral of √x from 0 to 1, with ■ 5 right samples (above) and ■ 12 left samples (below) To start off, consider the curve y = f(x) between x = 0 and x = 1, with f(x) = √x. We ask: What is the area under the function f, in the interval from 0 to 1? and call this (yet unknown) area the integral of f. The notation for this integral will be As a first approximation, look at the unit square given by the sides x = 0 to x = 1 and y = f(0) = 0 and y = f(1) = 1. Its area is exactly 1. As it is, the true value of the integral must be somewhat less. Decreasing the width of the approximation rectangles shall give a better result; so cross the interval in five steps, using the approximation points 0, 1⁄5, 2⁄5, and so on to 1. Fit a box for each step using the right end height of each curve piece, thus √1⁄5, √2⁄5, and so on to √1 = 1. Summing the areas of these rectangles, we get a better approximation for the sought integral, namely Notice that we are taking a sum of finitely many function values of f, multiplied with the differences of two subsequent approximation points. We can easily see that the approximation is still too large. Using more steps produces a closer approximation, but will never be exact: replacing the 5 subintervals by twelve as depicted, we will get an approximate value for the area of 0.6203, which is too small. 
The key idea is the transition from adding finitely many differences of approximation points multiplied by their respective function values to using infinitely many fine, or infinitesimal steps. As for the actual calculation of integrals, the fundamental theorem of calculus, due to Newton and Leibniz, is the fundamental link between the operations of differentiating and integrating. Applied to the square root curve, f(x) = x1/2, it says to look at the antiderivative F(x) = 2⁄3x3/2, and simply take F(1) − F(0), where 0 and 1 are the boundaries of the interval [0,1]. So the exact value of the area under the curve is computed formally as (This is a case of a general rule, that for f(x) = xq, with q ≠ −1, the related function, the so-called antiderivative is F(x) = (xq+1)/(q + 1).) The notation conceives the integral as a weighted sum, denoted by the elongated s, of function values, f(x), multiplied by infinitesimal step widths, the so-called differentials, denoted by dx. The multiplication sign is usually omitted. Historically, after the failure of early efforts to rigorously interpret infinitesimals, Riemann formally defined integrals as a limit of weighted sums, so that the dx suggested the limit of a difference (namely, the interval width). Shortcomings of Riemann's dependence on intervals and continuity motivated newer definitions, especially the Lebesgue integral, which is founded on an ability to extend the idea of "measure" in much more flexible ways. Thus the notation refers to a weighted sum in which the function values are partitioned, with μ measuring the weight to be assigned to each value. Here A denotes the region of integration. Differential geometry, with its "calculus on manifolds", gives the familiar notation yet another interpretation. Now f(x) and dx become a differential form, ω = f(x) dx, a new differential operator d, known as the exterior derivative is introduced, and the fundamental theorem becomes the more general Stokes' theorem, from which Green's theorem, the divergence theorem, and the fundamental theorem of calculus follow. More recently, infinitesimals have reappeared with rigor, through modern innovations such as non-standard analysis. Not only do these methods vindicate the intuitions of the pioneers; they also lead to new mathematics. Although there are differences between these conceptions of integral, there is considerable overlap. Thus, the area of the surface of the oval swimming pool can be handled as a geometric ellipse, a sum of infinitesimals, a Riemann integral, a Lebesgue integral, or as a manifold with a differential form. The calculated result will be the same for all. [edit]Formal definitions There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but also occasionally for pedagogical reasons. The most commonly used definitions of integral are Riemann integrals and Lebesgue integrals. [edit]Riemann integral Main article: Riemann integral Integral approached as Riemann sum based on tagged partition, with irregular sampling positions and widths (max in red). True value is 3.76; estimate is 3.648. The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. 
Let [a,b] be a closed interval of the real line; then a tagged partition of [a,b] is a finite sequence Riemann sums converging as intervals halve, whether sampled at ■ right, ■ minimum, ■ maximum, or ■ left. This partitions the interval [a,b] into n sub-intervals [xi−1, xi] indexed by i, each of which is "tagged" with a distinguished point ti ∈ [xi−1, xi]. A Riemann sum of a function f with respect to such a tagged partition is defined as thus each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. Let Δi = xi−xi−1 be the width of sub-interval i; then the mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, maxi=1…n Δi. The Riemann integral of a function f over the interval [a,b] is equal to S if: For all ε > 0 there exists δ > 0 such that, for any tagged partition [a,b] with mesh less than δ, we have When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral. [edit]Lebesgue integral Main article: Lebesgue integration Riemann-Darboux's integration (blue) and Lebesgue integration (red). The Riemann integral is not defined for a wide range of functions and situations of importance in applications (and of interest in theory). For example, the Riemann integral can easily integrate density to find the mass of a steel beam, but cannot accommodate a steel ball resting on it. This motivates other definitions, under which a broader assortment of functions are integrable (Rudin 1987). The Lebesgue integral, in particular, achieves great flexibility by directing attention to the weights in the weighted sum. The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure μ(A) of an interval A = [a,b] is its width, b − a, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals. To exploit this flexibility, Lebesgue integrals reverse the approach to the weighted sum. As Folland (1984, p. 56) puts it, "To compute the Riemann integral of f, one partitions the domain [a,b] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f". One common approach first defines the integral of the indicator function of a measurable set A by: . This extends by linearity to a measurable simple function s, which attains only a finite number, n, of distinct non-negative values: (where the image of Ai under the simple function s is the constant value ai). Thus if E is a measurable set one defines Then for any non-negative measurable function f one defines that is, the integral of f is set to be the supremum of all the integrals of simple functions that are less than or equal to f. 
A general measurable function f is split into its positive and negative values by defining Finally, f is Lebesgue integrable if and then the integral is defined by When the measure space on which the functions are defined is also a locally compact topological space (as is the case with the real numbers R), measures compatible with the topology in a suitable sense (Radon measures, of which the Lebesgue measure is an example) and integral with respect to them can be defined differently, starting from the integrals of continuous functions with compact support. More precisely, the compactly supported functions form a vector space that carries a natural topology, and a (Radon) measure can be defined as any continuous linear functional on this space; the value of a measure at a compactly supported function is then also by definition the integral of the function. One then proceeds to expand the measure (the integral) to more general functions by continuity, and defines the measure of a set as the integral of its indicator function. This is the approach taken by Bourbaki (2004) and a certain number of other authors. For details see Radon measures. [edit]Other integrals Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including: The Riemann–Stieltjes integral, an extension of the Riemann integral. The Lebesgue-Stieltjes integral, further developed by Johann Radon, which generalizes the Riemann–Stieltjes and Lebesgue integrals. The Daniell integral, which subsumes the Lebesgue integral and Lebesgue-Stieltjes integral without the dependence on measures. The Henstock-Kurzweil integral, variously defined by Arnaud Denjoy, Oskar Perron, and (most elegantly, as the gauge integral) Jaroslav Kurzweil, and developed by Ralph Henstock. The Itō integral and Stratonovich integral, which define integration with respect to semimartingales such as Brownian motion. The Young integral, which is a kind of Riemann-Stieltjes integral with respect to certain functions of unbounded variation. The rough path integral defined for functions equipped with some additional "rough path" structure, generalizing stochastic integration against both semimartingales and processes such as the fractional Brownian motion. [edit]Properties [edit]Linearity The collection of Riemann integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration is a linear functional on this vector space. Thus, firstly, the collection of integrable functions is closed under taking linear combinations; and, secondly, the integral of a linear combination is the linear combination of the integrals, Similarly, the set of real-valued Lebesgue integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence form a vector space, and the Lebesgue integral is a linear functional on this vector space, so that More generally, consider the vector space of all measurable functions on a measure space (E,μ), taking values in a locally compact complete topological vector space V over a locally compact topological field K, f : E → V. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞, that is compatible with linear combinations. In this situation the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). 
The most important special cases arise when K is R, C, or a finite extension of the field Qp of p-adic numbers, and V is a finite-dimensional vector space over K, and when K=C and V is a complex Hilbert space. Linearity, together with some natural continuity properties and normalisation for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See (Hildebrandt 1953) for an axiomatic characterisation of the integral. [edit]Inequalities for integrals A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [a, b] and can be generalized to other notions of integral (Lebesgue and Daniell). Upper and lower bounds. An integrable function f on [a, b], is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f (x) ≤ M for all x in [a, b]. Since the lower and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b]. In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if f(x) b: Reversing limits of integration. If a > b then define This, with a = b, implies: Integrals over intervals of length zero. If a is a real number then The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One reason for the first convention is that the integrability of f on an interval [a, b] implies that f is integrable on any subinterval [c, d], but in particular integrals have the property that: Additivity of integration on intervals. If c is any element of [a, b], then With the first convention the resulting relation is then well-defined for any cyclic permutation of a, b, and c. Instead of viewing the above as conventions, one can also adopt the point of view that integration is performed of differential forms on oriented manifolds only. If M is such an oriented m-dimensional manifold, and M is the same manifold with opposed orientation and ω is an m-form, then one has: These conventions correspond to interpreting the integrand as a differential form, integrated over a chain. In measure theory, by contrast, one interprets the integrand as a function f with respect to a measure μ, and integrates over a subset A, without any notion of orientation; one writes to indicate integration over a subset A. This is a minor distinction in one dimension, but becomes subtler on higher dimensional manifolds; see Differential form: Relation with measures for details. [edit]Fundamental theorem of calculus Main article: Fundamental theorem of calculus The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. 
An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated. [edit]Statements of theorems Fundamental theorem of calculus. Let ƒ be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by Then, F is continuous on [a, b], differentiable on the open interval (a, b), and for all x in (a, b). Second fundamental theorem of calculus. Let ƒ be a real-valued function defined on a closed interval [a, b] that admits an antiderivative g on [a, b]. That is, ƒ and g are functions such that for all x in [a, b], If ƒ is integrable on [a, b] then [edit]Extensions [edit]Improper integrals Main article: Improper integral The improper integral has unbounded intervals for both domain and range. A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals. If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity. If the integrand is only defined or finite on a half-open interval, for instance (a,b], then again a limit may provide a finite result. That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points. Consider, for example, the function integrated from 0 to ∞ (shown right). At the lower bound, as x goes to 0 the function goes to ∞, and the upper bound is itself ∞, though the function goes to 0. Thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of . To integrate from 1 to ∞, a Riemann sum is not possible. However, any finite upper bound, say t (with t > 1), gives a well-defined result, . This has a finite limit as t goes to infinity, namely . Similarly, the integral from 1⁄3 to 1 allows a Riemann sum as well, coincidentally again producing . Replacing 1⁄3 by an arbitrary positive value s (with s n will always be zero, by the alternating property. In addition to the wedge product, there is also the exterior derivative operator d. This operator maps k-forms to (k+1)-forms. For a k-form ω = f dxa over Rn, we define the action of d by: with extension to general k-forms occurring linearly. This more general approach allows for a more natural coordinate-free approach to integration on manifolds. It also allows for a natural generalisation of the fundamental theorem of calculus, called Stokes' theorem, which we may state as where ω is a general k-form, and ∂Ω denotes the boundary of the region Ω. Thus, in the case that ω is a 0-form and Ω is a closed interval of the real line, this reduces to the fundamental theorem of calculus. In the case that ω is a 1-form and Ω is a two-dimensional region in the plane, the theorem reduces to Green's theorem. Similarly, using 2-forms, and 3-forms and Hodge duality, we can arrive at Stokes' theorem and the divergence theorem. In this way we can see that differential forms provide a powerful unifying view of integration. 
[edit]Summations The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time scale calculus. [edit]Methods [edit]Computing integrals The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let f(x) be the function of x to be integrated over a given interval [a, b]. Then, find an antiderivative of f; that is, a function F such that F' = f on the interval. Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus, The integral is not actually the antiderivative, but the fundamental theorem provides a way to use antiderivatives to evaluate definite integrals. The most difficult step is usually to find the antiderivative of f. It is rarely possible to glance at a function and write down its antiderivative. More often, it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include: Integration by substitution Integration by parts Changing the order of integration Integration by trigonometric substitution Integration by partial fractions Integration by reduction formulae Integration using parametric derivatives Integration using Euler's formula Differentiation under the integral sign Contour integration Alternate methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral. Computations of volumes of solids of revolution can usually be done with disk integration or shell integration. Specific results which have been worked out by various techniques are collected in the list of integrals. [edit]Symbolic algorithms Main article: Symbolic integration Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma. A major mathematical difficulty in symbolic integration is that in many cases, a closed formula for the antiderivative of a rather simple-looking function does not exist. 
For instance, it is known that the antiderivatives of the functions exp ( x2), xx and sin x /x cannot be expressed in the closed form involving only rational and exponential functions, logarithm, trigonometric and inverse trigonometric functions, and the operations of multiplication and composition; in other words, none of the three given functions is integrable in elementary functions, which are the functions which may be built from rational functions, roots of a polynomial, logarithm, and exponential functions. Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary, and, if it is, to compute it. Unfortunately, it turns out that functions with closed expressions of antiderivatives are the exception rather than the rule. Consequently, computerized algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may be still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and operations of multiplication and composition, and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithm, and exponential functions. Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions of physics (like the Legendre functions, the hypergeometric function, the Gamma function, the Incomplete Gamma function and so on - see Symbolic integration for more details). Extending the Risch's algorithm to include such functions is possible but challenging and has been an active research subject. More recently a new approach has emerged, using D-finite function, which are the solutions of linear differential equations with polynomial coefficients. Most of the elementary and special functions are D-finite and the integral of a D-finite function is also a D-finite function. This provide an algorithm to express the antiderivative of a D-finite function as the solution of a differential equation. This theory allows also to compute a definite integrals of a D-function as the sum of a series given by the first coefficients and an algorithm to compute any coefficient.[4] [edit]Numerical quadrature Main article: numerical integration The integrals encountered in a basic calculus course are deliberately chosen for simplicity; those found in real applications are not always so accommodating. Some integrals cannot be found exactly, some require special functions which themselves are a challenge to compute, and others are so complex that finding the exact answer is too slow. This motivates the study and application of numerical methods for approximating integrals, which today use floating-point arithmetic on digital electronic computers. Many of the ideas arose much earlier, for hand calculations; but the speed of general-purpose computers like the ENIAC created a need for improvements. The goals of numerical integration are accuracy, reliability, efficiency, and generality. Sophisticated methods can vastly outperform a naive method by all four measures (Dahlquist & Björck 2008; Kahaner, Moler & Nash 1989; Stoer & Bulirsch 2002). 
Consider, for example, the integral which has the exact answer 94⁄25 = 3.76. (In ordinary practice the answer is not known in advance, so an important task — not explored here — is to decide when an approximation is good enough.) A “calculus book” approach divides the integration range into, say, 16 equal pieces, and computes function values. Spaced function values x −2.00 −1.50 −1.00 −0.50 0.00 0.50 1.00 1.50 2.00 f(x) 2.22800 2.45663 2.67200 2.32475 0.64400 −0.92575 −0.94000 −0.16963 0.83600 x −1.75 −1.25 −0.75 −0.25 0.25 0.75 1.25 1.75 f(x) 2.33041 2.58562 2.62934 1.64019 −0.32444 −1.09159 −0.60387 0.31734 Numerical quadrature methods: ■ Rectangle, ■ Trapezoid, ■ Romberg, ■ Gauss Using the left end of each piece, the rectangle method sums 16 function values and multiplies by the step width, h, here 0.25, to get an approximate value of 3.94325 for the integral. The accuracy is not impressive, but calculus formally uses pieces of infinitesimal width, so initially this may seem little cause for concern. Indeed, repeatedly doubling the number of steps eventually produces an approximation of 3.76001. However, 218 pieces are required, a great computational expense for such little accuracy; and a reach for greater accuracy can force steps so small that arithmetic precision becomes an obstacle. A better approach replaces the horizontal tops of the rectangles with slanted tops touching the function at the ends of each piece. This trapezium rule is almost as easy to calculate; it sums all 17 function values, but weights the first and last by one half, and again multiplies by the step width. This immediately improves the approximation to 3.76925, which is noticeably more accurate. Furthermore, only 210 pieces are needed to achieve 3.76000, substantially less computation than the rectangle method for comparable accuracy. Romberg's method builds on the trapezoid method to great effect. First, the step lengths are halved incrementally, giving trapezoid approximations denoted by T(h0), T(h1), and so on, where hk+1 is half of hk. For each new step size, only half the new function values need to be computed; the others carry over from the previous size (as shown in the table above). But the really powerful idea is to interpolate a polynomial through the approximations, and extrapolate to T(0). With this method a numerically exact answer here requires only four pieces (five function values)! The Lagrange polynomial interpolating {hk,T(hk)}k=0…2 = {(4.00,6.128), (2.00,4.352), (1.00,3.908)} is 3.76+0.148h2, producing the extrapolated value 3.76 at h = 0. Gaussian quadrature often requires noticeably less work for superior accuracy. In this example, it can compute the function values at just two x positions, ±2⁄√3, then double each value and sum to get the numerically exact answer. The explanation for this dramatic success lies in error analysis, and a little luck. An n-point Gaussian method is exact for polynomials of degree up to 2n−1. The function in this example is a degree 3 polynomial, plus a term that cancels because the chosen endpoints are symmetric around zero. (Cancellation also benefits the Romberg method.) Shifting the range left a little, so the integral is from −2.25 to 1.75, removes the symmetry. Nevertheless, the trapezoid method is rather slow, the polynomial interpolation method of Romberg is acceptable, and the Gaussian method requires the least work — if the number of points is known in advance. 
As well, rational interpolation can use the same trapezoid evaluations as the Romberg method to greater effect. Quadrature method cost comparison Method Trapezoid Romberg Rational Gauss Points 1048577 257 129 36 Rel. Err. −5.3×10−13 −6.3×10−15 8.8×10−15 3.1×10−15 Value In practice, each method must use extra evaluations to ensure an error bound on an unknown function; this tends to offset some of the advantage of the pure Gaussian method, and motivates the popular Gauss–Kronrod quadrature formulae. Symmetry can still be exploited by splitting this integral into two ranges, from −2.25 to −1.75 (no symmetry), and from −1.75 to 1.75 (symmetry). More broadly, adaptive quadrature partitions a range into pieces based on function properties, so that data points are concentrated where they are needed most. Simpson's rule, named for Thomas Simpson (1710–1761), uses a parabolic curve to approximate integrals. In many cases, it is more accurate than the trapezoidal rule and others. The rule states that with an error of The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration. A calculus text is no substitute for numerical analysis, but the reverse is also true. Even the best adaptive numerical code sometimes requires a user to help with the more demanding integrals. For example, improper integrals may require a change of variable or methods that can avoid infinite function values, and known properties like symmetry and periodicity may provide critical leverage. [edit]Practical applications The time course of drug plasma concentrations over 96 hours following oral administrations every 24 hours. Note that the AUC in steady state equals AUC∞ after the first dose. Area under the curve (abbreviated AUC) is frequently used in pharmacokinetics for functions where the x-axis represents time and the y-axis represents drug concentration. For such functions, the area under the curve usually correlates fairly well with the total effect on the body that the drug will have. In standard use, AUC is defined as either: AUC∞, the integral after a single dose with a hypothetical infinite x-axis AUCτ, the integral in the time interval between doses given regularly, and after having reached steady state. Integral From Wikipedia, the free encyclopedia This article is about the concept of integrals in calculus. For the set of numbers, see integer. For other uses, see Integral (disambiguation). A definite integral of a function can be represented as the signed area of the region bounded by its graph. Topics in Calculus Fundamental theorem Limits of functions Continuity Mean value theorem [show]Differential calculus [show]Integral calculus [show]Vector calculus [show]Multivariable calculus Integration is an important concept in mathematics and, together with its inverse, differentiation, is one of the two main operations in calculus. Given a function ƒ of a real variable x and an interval [a, b] of the real line, the definite integral is defined informally to be the net signed area of the region in the xy-plane bounded by the graph of ƒ, the x-axis, and the vertical lines x = a and x = b. The term integral may also refer to the notion of antiderivative, a function F whose derivative is the given function ƒ. In this case, it is called an indefinite integral and is written: The integrals discussed in this article are termed definite integrals. 
The principles of integration were formulated independently by Isaac Newton and Gottfried Leibniz in the late 17th century. Through the fundamental theorem of calculus, which they independently developed, integration is connected with differentiation: if ƒ is a continuous real-valued function defined on a closed interval [a, b], then, once an antiderivative F of ƒ is known, the definite integral of ƒ over that interval is given by Integrals and derivatives became the basic tools of calculus, with numerous applications in science and engineering. The founders of the calculus thought of the integral as an infinite sum of rectangles of infinitesimal width. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. Beginning in the nineteenth century, more sophisticated notions of integrals began to appear, where the type of the function as well as the domain over which the integration is performed has been generalised. A line integral is defined for functions of two or three variables, and the interval of integration [a, b] is replaced by a certain curve connecting two points on the plane or in the space. In a surface integral, the curve is replaced by a piece of a surface in the three-dimensional space. Integrals of differential forms play a fundamental role in modern differential geometry. These generalizations of integrals first arose from the needs of physics, and they play an important role in the formulation of many physical laws, notably those of electrodynamics. There are many modern concepts of integration, among these, the most common is based on the abstract mathematical theory known as Lebesgue integration, developed by Henri Lebesgue. Contents [hide] 1 History 1.1 Pre-calculus integration 1.2 Newton and Leibniz 1.3 Formalizing integrals 1.4 Notation 2 Terminology and notation 3 Introduction 4 Formal definitions 4.1 Riemann integral 4.2 Lebesgue integral 4.3 Other integrals 5 Properties 5.1 Linearity 5.2 Inequalities for integrals 5.3 Conventions 6 Fundamental theorem of calculus 6.1 Statements of theorems 7 Extensions 7.1 Improper integrals 7.2 Multiple integration 7.3 Line integrals 7.4 Surface integrals 7.5 Integrals of differential forms 7.6 Summations 8 Methods 8.1 Computing integrals 8.2 Symbolic algorithms 8.3 Numerical quadrature 9 Practical applications 10 See also 11 Notes 12 References 13 External links 13.1 Online books [edit]History See also: History of calculus [edit]Pre-calculus integration Integration can be traced as far back as ancient Egypt ca. 1800 BC, with the Moscow Mathematical Papyrus demonstrating knowledge of a formula for the volume of a pyramidal frustum. The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus (ca. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of shapes for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. Similar methods were independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. 
This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere.[1] The next major step in integral calculus came from the Abbasid Caliphate when the 11th century mathematician Ibn al-Haytham (known as Alhazen in Europe) devised what is now known as "Alhazen's problem", which leads to an equation of the fourth degree, in his Book of Optics. While solving this problem, he applied mathematical induction to find the formula for sums of fourth powers, by a method that can be generalized to sums of arbitrary natural powers; then he used this formula to find the volume of a paraboloid (in modern terminology, he integrated a polynomial of degree 4).[2] Some ideas of integral calculus are also found in the Siddhanta Shiromani, a 12th century astronomy text by Indian mathematician Bhāskara II.[citation needed] The next significant advances in integral calculus did not begin to appear until the 16th century. At this time the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of xn up to degree n = 9 in Cavalieri's quadrature formula. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers. At around the same time, there was also a great deal of work being done by Japanese mathematicians, particularly by Seki Kōwa.[3] He made a number of contributions, namely in methods of determining areas of figures using integrals, extending the method of exhaustion. [edit]Newton and Leibniz The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions within continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz. [edit]Formalizing integrals While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann integrable on a bounded interval, subsequently more general functions were considered – particularly in the context of Fourier analysis – to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. 
These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system. [edit]Notation Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with or , which Newton used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted. The modern notation for the indefinite integral was introduced by Gottfried Leibniz in 1675 (Burton 1988, p. 359; Leibniz 1899, p. 154). He adapted the integral symbol, ∫, from the letter ſ (long s), standing for summa (written as ſumma; Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the French Academy around 1819–20, reprinted in his book of 1822 (Cajori 1929, pp. 249–250; Fourier 1822, §231). [edit]Terminology and notation If a function has an integral, it is said to be integrable. The function for which the integral is calculated is called the integrand. The region over which a function is being integrated is called the domain of integration. Usually this domain will be an interval, in which case it is enough to give the limits of that interval, which are called the limits of integration. If the integral does not have a domain of integration, it is considered indefinite (one with a domain is considered definite). In general, the integrand may be a function of more than one variable, and the domain of integration may be an area, volume, a higher dimensional region, or even an abstract space that does not have a geometric structure in any usual sense (such as a sample space in probability theory). The simplest case, the integral of a real-valued function f of one real variable x on the interval [a, b], is denoted by The ∫ sign represents integration; a and b are the lower limit and upper limit, respectively, of integration, defining the domain of integration; f is the integrand, to be evaluated as x varies over the interval [a,b]; and dx is the variable of integration. In correct mathematical typography, the dx is separated from the integrand by a space (as shown). Some authors use an upright d (that is, dx instead of dx). The variable of integration dx has different interpretations depending on the theory being used. For example, it can be seen as strictly a notation indicating that x is a dummy variable of integration, as a reflection of the weights in the Riemann sum, a measure (in Lebesgue integration and its extensions), an infinitesimal (in non-standard analysis) or as an independent mathematical quantity: a differential form. More complicated cases may vary the notation slightly. In the modern Arabic mathematical notation, which aims at pre-university levels of education in the Arab world and is written from right to left, a reflected integral symbol is used (W3C 2006). [edit]Introduction Integrals appear in many practical situations. If a swimming pool is rectangular with a flat bottom, then from its length, width, and depth we can easily determine the volume of water it can contain (to fill it), the area of its surface (to cover it), and the length of its edge (to rope it). But if it is oval with a rounded bottom, all of these quantities call for integrals. 
Introduction

Integrals appear in many practical situations. If a swimming pool is rectangular with a flat bottom, then from its length, width, and depth we can easily determine the volume of water it can contain (to fill it), the area of its surface (to cover it), and the length of its edge (to rope it). But if it is oval with a rounded bottom, all of these quantities call for integrals. Practical approximations may suffice for such trivial examples, but precision engineering (of any discipline) requires exact and rigorous values for these quantities.

[Figure: approximations to the integral of √x from 0 to 1, with 5 right samples (above) and 12 left samples (below).]

To start off, consider the curve y = f(x) between x = 0 and x = 1, with f(x) = √x. We ask: What is the area under the function f in the interval from 0 to 1? We call this (yet unknown) area the integral of f. The notation for this integral will be

∫_0^1 √x dx.

As a first approximation, look at the unit square given by the sides x = 0 to x = 1 and y = f(0) = 0 and y = f(1) = 1. Its area is exactly 1. As it is, the true value of the integral must be somewhat less. Decreasing the width of the approximation rectangles will give a better result; so cross the interval in five steps, using the approximation points 0, 1⁄5, 2⁄5, and so on up to 1. Fit a box for each step using the right-end height of each curve piece, thus √(1⁄5), √(2⁄5), and so on up to √(5⁄5) = 1; summing the areas of these five rectangles gives a first, still rough, approximation of the sought area.
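The same right-endpoint construction is easy to carry out numerically. The following short Python sketch (added here for illustration; it is not part of the original article, and the helper name right_riemann_sum is my own) computes the five-rectangle estimate described above and compares it with the exact area:

import math

def right_riemann_sum(f, a, b, n):
    # Approximate the area under f on [a, b] with n rectangles,
    # each taking its height from the right end of its subinterval.
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(1, n + 1))

print(right_riemann_sum(math.sqrt, 0.0, 1.0, 5))     # about 0.7497 (an overestimate)
print(right_riemann_sum(math.sqrt, 0.0, 1.0, 5000))  # about 0.6668
print(2.0 / 3.0)                                     # exact value of the integral, ~0.6667

With only five right samples the sum overestimates the true area, and refining the partition drives the estimate toward 2/3, which is exactly the limiting behaviour that Riemann's definition makes precise.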

2012.02.24 | 06:20

16
I'll get the donkey out, keep 200 sheep, 100 cows and 200 chickens, and live my life in the village, and Borisov can drink the petrol for free for all I care. I hope nothing good ever comes your way there in parliament.
Noblesse

2012.02.24 | 03:40

2
It is a big delusion, endlessly recycled in the media, that there is a shortage of oil and that the prices of the end products must therefore immediately go up... There are plenty of reserves, but it suits the big corporations to sow confusion and anxiety among people. After all, communism fell after 1989 and Mr. capitalism took its place!!! All over the world the big oil corporations, such as Chevron and others, drill continuously and discover deposits. Everything is mapped and recorded. They make in-depth analyses of quality and quantity and decide: we will extract here, not there. What has the USA been fighting wars for since the Second World War? For fuel and weapons... So it is no surprise that they are raising prices yet again. I do not like living in Bulgaria and paying more for utilities than an average Western European or American either, but that is the truth. The big American economic machine keeps working and always achieves its goal, regardless of the means. After all, they send their economic hit men to countries like ours, privatizing and devastating everything in their path: EON, EVN, military bases, new equipment for the army and the police. In the end Bulgaria will have such a huge external debt that it will have nothing to pay it with. Unfortunately, the protests that we ordinary people stage do not achieve their ultimate goal...
Илиев

2012.02.24 | 06:20

17
That's how it is here, folks: excise duties go up, fuel prices go up; excise duties go down, fuel prices go up anyway; there is a crisis, fuel prices go up; there is no crisis, they go up again; the price of crude rises and fuel prices rise; the price of crude falls and fuel prices rise once more. I am not expecting any big bang, because we are in Bulgaria. Here it does not matter how people are doing; they can be starving or freezing to death, as long as we do not embarrass ourselves in front of the foreigners. And the people who govern us write themselves bonuses and grade their own work, so who cares. They are so far removed from our everyday life that when I hear them describe this place as practically an earthly paradise, with Europe supposedly envying us and learning from us, and how things are getting better and we will be fine, I think they are talking about some other Bulgaria, not the one we live in. What protests, my friend? Seeing how the police have been let loose, I don't know whether they won't beat me just for writing comments, let alone what beating I would get at a protest. Recordings come out of the prime minister covering for Misho the Beer Man, who then kicks the bucket and nothing more is heard of the matter. Some people get beaten over five cartons of cigarettes without excise stamps, others because they have long hair and look suspicious. We have never had governance this bad; no previous government allowed itself such things. But what do you expect, they are all former cops and firemen. A PE man as minister of the interior: imagine how educated the people governing us are. There will be no big bang, my friend, because next year they will elect them again. Weak people, weak state, high fuel prices.

2012.02.24 | 06:21

19
What the hell is going on in this cesspool? Damn it all, do most of us work just for 80-90 litres of diesel a month? To hell with you GERB people, the whole lot of you.
Румен

2012.02.28 | 17:48

24
Come on, people, why are we asleep? Are we waiting for petrol to hit 3 leva? LET'S GO OUT AND PROTEST. In a country like France, where people earn at least 2000 euros, the price per litre is 1.50 euros. In Bulgaria, where we earn 200 euros on average, the price is 1.35 euros. What are we even talking about: a difference of 1800 euros a month and only 15 cents per litre. These rulers are mocking the people, and we just keep silent, on and on. Until when?
Христо

2012.02.28 | 17:49

25
Don't look at how much petrol costs there and here, because that is inevitable: the prices will be roughly the same. Look at the fact that wages here are low...
Slow

2012.02.28 | 17:48

23
But why do we keep pushing in the wrong direction? The problem is NOT the price of fuel but your standard of living... There are people who benefit from the nation living like cattle... As long as such criminals are around to toss us back and forth, that's how it will be. It's not just fuel they'll raise; what are we going to do when bread reaches 3 leva?
На Рокфеле® сино му™

2012.02.29 | 19:45

28
I already have the feeling that this is some kind of experiment on us: how much more can we take?! We protested against ACTA and succeeded; let's do it again over petrol until it works, there is no other way. People in other countries know this!!! Fact.
Male

2012.02.29 | 19:44

27
When did they get this high? The last time I filled up it was at 1.96. Well, now "THE BATTLE IS ON". Damn the whole rotten thieving lot of you: the reds, the greys, the blues, the yellows and everyone like you.
Хм

2012.02.29 | 19:44

26
I had nothing personal against GERB; I was quite tolerant. But the fact that they are doing nothing right now shows they have no will whatsoever to defend our interests. All they think about is how to suck up to the EU, as if the EU elected them and not the Bulgarian people. They are displaying total helplessness... What are they waiting for???!!!
