INSTITUTE OF STATISTICS
BOX 5457, STATE COLLEGE STATION
RALEIGH, NORTH CAROLINA

UNIVERSITY OF NORTH CAROLINA
Department of Statistics
Chapel Hill, N. C.

Institute of Statistics Mimeo Series No. 381
February 1964

SOME REMARKS ON A DISTRIBUTION OCCURRING IN NEURAL STUDIES*

by

Walter L. Smith
University of North Carolina

*This research was supported by the Office of Naval Research under contract No. Nonr-855(09) for research in probability and statistics at the University of North Carolina, Chapel Hill, N. C. Reproduction in whole or in part is permitted for any purpose of the United States Government.

Introduction. Suppose that X is a non-negative random variable (which does not vanish with probability one) such that E log(1 + X) is finite and, for real positive s, put \varphi(s) = E e^{-sX}. For any c > 0 we shall show there exists a distribution function G(x), of a non-negative random variable, such that

    G^*(s) = \int_0^\infty e^{-sx}\,dG(x) = \exp\Big\{-c\int_0^s \frac{1-\varphi(z)}{z}\,dz\Big\}.    (1)

This distribution function G(x) arises in a variety of contexts. The author obtained it many years ago in some unpublished work on the initiation of nerve pulses. It has also arisen in studies of a certain recording apparatus (Takacs, 1955) and of the "present value" of a renewal process (Dall'Aglio, 1963). More recently it was derived in a colloquium at University College, London, by Dr. J. Keilson, who raised the question of whether G(x) is absolutely continuous and, if so, of how the corresponding probability density function behaves near the origin. It is the object of the present paper to prove the following theorem of several parts.

Theorem 1. If F(x) is the distribution function of X and we assume that

    \int_0^\infty \log(1+x)\,dF(x) < \infty,    (2)

then:

(1.1) Equation (1) defines an absolutely continuous distribution function G(x), with a probability density function g(x), say, which is continuous on the open interval (0, \infty).

(1.2) There is a strictly decreasing function D_1(x) such that G(x) = x^c D_1(x), and D_1(0+) is finite if and only if, in addition to (2),

    \int_0^1 \frac{F(x)}{x}\,dx < \infty,    (3)

in which case

    D_1(0+) = \frac{1}{\Gamma(1+c)}\exp\Big\{-c\int_0^\infty \frac{1 - e^{-x} - F(x)}{x}\,dx\Big\}.

Furthermore, if F(T) = 0 for some T > 0 then D_1(x) is constant in (0, T).

(1.3) There is a strictly decreasing convex function D_2(x) such that [1 - G(x)] = x^c D_2(x). If, for some 0 \le \gamma < 1,

    \int_0^x [1 - F(y)]\,dy \sim x^{\gamma} L(x), as x \to \infty,    (4)

where L(x) is a function of slow growth, then

    [1 - G(x)] \sim \frac{c\gamma\,L(x)}{(1-\gamma)\,x^{1-\gamma}}, as x \to \infty.    (5)

If, however,

    \int_0^x [1 - F(y)]\,dy \sim x\,L(x), as x \to \infty,    (6)

then

    [1 - G(x)] \sim c\,M(x), as x \to \infty,    (7)

where M(x) = \int_x^\infty \frac{L(z)}{z}\,dz and M(x) is also a function of slow growth.

(1.4) The continuous probability density function g(x) is such that x^{1-c} g(x) = d(x), say, is a strictly decreasing function of x, and x\,g(x) is a function of bounded variation. Moreover, d(0+) is finite if and only if (3) holds, in which case d(0+) = c\,D_1(0+). If (4) should hold, then

    g(x) \sim \frac{c\gamma\,L(x)}{x^{2-\gamma}}, as x \to \infty,    (8)

while, if (6) should hold, then

    g(x) \sim \frac{c\,L(x)}{x}, as x \to \infty.    (9)

(1.5) If, for some A > 0, \lambda > 0, \nu \ge 0, we have

    1 - F(x) \le \frac{A\,e^{-\lambda x}\,x^{\nu}}{\Gamma(\nu+1)}

for all sufficiently large x, then, as x \to \infty,

    g(x) = O\Big(x^{-\frac{\nu+2}{2(\nu+1)}}\exp\Big\{-\lambda x + \frac{\nu+1}{\nu}\,(Ac)^{\frac{1}{\nu+1}}\,x^{\frac{\nu}{\nu+1}}\Big\}\Big), if \nu > 0,

    g(x) = O\big(e^{-\lambda x}\,x^{Ac-1}\big), if \nu = 0.
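Before passing to the proofs it may be helpful to record one concrete special case; the particular choice of F(x) made here is purely illustrative and plays no part in the sequel. Suppose X is exponentially distributed, F(x) = 1 - e^{-x} for x \ge 0, so that \varphi(z) = 1/(1+z) and [1-\varphi(z)]/z = 1/(1+z). Then (1) gives

    G^*(s) = \exp\Big\{-c\int_0^s \frac{dz}{1+z}\Big\} = (1+s)^{-c},

so that G is the gamma distribution with density g(x) = x^{c-1}e^{-x}/\Gamma(c). In this case x^{1-c}g(x) = e^{-x}/\Gamma(c) is indeed strictly decreasing, G(x) \sim x^c/\Gamma(1+c) as x \to 0+, and, since 1 - e^{-x} - F(x) \equiv 0, the formula of Part (1.2) gives D_1(0+) = 1/\Gamma(1+c), in agreement with the direct calculation.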
To prove Theorem 1 we find it necessary to establish the following three theorems concerning a more general class of density functions.

Theorem 2. If a(x) \ge 0 and

    \int_0^\infty \frac{a(x)}{x}\,dx = \infty,  \int_0^\infty \frac{a(x)}{1+x}\,dx < \infty,

and if we write, for \Re(s) > 0,

    \hat a(s) = \int_0^\infty e^{-sx}\,a(x)\,dx,

then there is a probability density function \Delta_a(x), say, on (0, \infty) such that

    \Delta_a^*(s) = \int_0^\infty e^{-sx}\,\Delta_a(x)\,dx = \exp\Big\{-\int_0^s \hat a(z)\,dz\Big\},

where the contour integral in the exponent is taken along a straight line.

Theorem 3. In the notation of Theorem 2, if a(x) is continuous and of bounded variation then we may take \Delta_a(x) as continuous and of bounded variation in any interval not containing the origin. Moreover, if a(x) \le A e^{-\eta x} for some A > 0, \eta > 0, then \Delta_a(x) = O(e^{-\eta x} x^{A-1}).

Theorem 4. If a_1(x) and a_2(x) both satisfy the conditions of Theorem 2, and if a_1(x) \ge a_2(x) for all x and

    \int_0^1 \frac{a_1(x) - a_2(x)}{x}\,dx < \infty,

then, in an obvious extension of the notation,

    G_{01}(x) \le G_{02}(x) for all x > 0,

and

    \Delta_{a_1}(x) \ge \exp\Big\{-\int_0^\infty \frac{a_1(x) - a_2(x)}{x}\,dx\Big\}\,\Delta_{a_2}(x) for almost all x.

In part of our argument we make use of the continuity theorem for Laplace–Stieltjes transforms. There does not seem to be any convenient reference for this useful theorem (although its use occurs in the literature from time to time). We therefore append a short proof in an appendix.

Proof of Theorem 2. Write, for fixed \delta > 0,

    I_\delta = \int_\delta^\infty \frac{a(x)}{x}\,dx,

and define

    g_\delta(x) = 0 for x \le \delta,  g_\delta(x) = \frac{a(x)}{I_\delta\,x} for x > \delta.

Then g_\delta(x) is a probability density function. Suppose that Z_1, Z_2, Z_3, ... is an infinite sequence of independent random variables, each governed by the density function g_\delta(x). Suppose M is an integer-valued random variable, independent of the \{Z_n\}, such that

    P\{M = r\} = \frac{e^{-I_\delta}\,(I_\delta)^r}{r!} for r = 0, 1, 2, ...,

and P\{M = r\} = 0 otherwise. Define a random variable Y = 0 if M = 0, and Y = Z_1 + Z_2 + ... + Z_M otherwise. Then it is an easy matter to see that Y has a distribution function G_\delta(x), say, where

    G_\delta^*(s) = \exp\{-I_\delta\,(1 - g_\delta^*(s))\}

(in the notation already suggested in the enunciation of Theorem 2 we have written g_\delta^*(s) for the ordinary Laplace transform of g_\delta(x)). Thus we have

    \log G_\delta^*(s) = -\int_\delta^\infty \frac{1 - e^{-sx}}{x}\,a(x)\,dx.    (10)

As \delta decreases to zero we see from (10) and Beppo Levi's theorem that G_\delta^*(s) \to G_0^*(s), say, where

    \log G_0^*(s) = -\int_0^\infty \frac{1 - e^{-sx}}{x}\,a(x)\,dx.    (11)

In view of our hypothesis about a(x) it is clear that the integral on the right of (11) is absolutely convergent. Also, from Lebesgue's theorem on dominated convergence, we can deduce that G_0^*(s) \to 1 as s decreases through real values to zero. It follows therefore, from the continuity theorem for Laplace–Stieltjes transforms, that there is a distribution function G_0(x) over (0, \infty) such that

    G_0^*(s) = \int_0^\infty e^{-sx}\,dG_0(x).

Furthermore, by Fubini's theorem,

    \int_0^\infty \frac{1 - e^{-sx}}{x}\,a(x)\,dx = \int_0^\infty\Big(\int_0^s e^{-zx}\,dz\Big)\,a(x)\,dx = \int_0^s \hat a(z)\,dz.

Thus the theorem will be proved if we show that G_0(x) is absolutely continuous.

To this end, we differentiate (11) and find that

    -\frac{d}{ds}\,G_0^*(s) = \hat a(s)\,G_0^*(s).    (12)

Hence, if \xi(x) is defined by

    x\,\xi(x) = \int_0^x a(x-z)\,dG_0(z),

it is a consequence of (12) that

    \int_0^\infty e^{-sx}\,x\,dG_0(x) = \int_0^\infty e^{-sx}\,x\,\xi(x)\,dx.    (13)

From (13) we infer that, except for a possible discontinuity at the origin, G_0(x) is absolutely continuous with a density function \xi(x); this density we take for the \Delta_a(x) of the theorem. Finally, we rule out the possibility of a point mass of probability at the origin by observing that its weight must equal (taking the limit through real values)

    \lim_{s \to \infty} G_0^*(s) = \exp\Big\{-\int_0^\infty \frac{a(x)}{x}\,dx\Big\},

which is zero, by our hypothesis that the integral diverges. Thus the theorem is proved.
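The following simple check, with a function a(x) chosen purely for illustration, may make the content of Theorems 2 and 3 more concrete. Take a(x) = c\,e^{-x}; then \int_0^\infty a(x)/(1+x)\,dx < \infty, \int_0^\infty a(x)/x\,dx diverges at the origin, and \hat a(s) = c/(1+s), so that

    \Delta_a^*(s) = \exp\Big\{-c\int_0^s \frac{dz}{1+z}\Big\} = (1+s)^{-c},  \Delta_a(x) = \frac{x^{c-1}e^{-x}}{\Gamma(c)}.

Since a(x) \le c\,e^{-x}, Theorem 3 predicts \Delta_a(x) = O(e^{-x}x^{c-1}), and in this instance the bound gives the exact order of magnitude.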
Proof of Theorem 3. The continuity and bounded variation properties claimed are easy consequences of the representation

    \Delta_a(x) = \frac{1}{x}\int_0^x a(x-z)\,dG_0(z).

From this equation we also see that if a(x) \le A e^{-\eta x} then

    e^{\eta x}\,\Delta_a(x) \le \frac{A}{x}\int_0^x e^{\eta z}\,\Delta_a(z)\,dz.    (14)

Therefore

    \frac{d}{dx}\log\int_0^x e^{\eta z}\,\Delta_a(z)\,dz \le \frac{A}{x},

and so, for x \ge 1,

    \log\int_0^x e^{\eta z}\,\Delta_a(z)\,dz \le A\log x + \log\int_0^1 e^{\eta z}\,\Delta_a(z)\,dz.

Thus

    \int_0^x e^{\eta z}\,\Delta_a(z)\,dz \le x^A\int_0^1 e^{\eta z}\,\Delta_a(z)\,dz,

and hence, from (14) again, we have

    e^{\eta x}\,\Delta_a(x) \le A\,x^{A-1}\int_0^1 e^{\eta z}\,\Delta_a(z)\,dz,

which completes the proof of the theorem.

Proof of Theorem 4. We extend the notation used in the earlier proofs, with the aid of suffixes, in an obvious way. Except for the convergence of the integral

    \int_0^1 \frac{a_1(x) - a_2(x)}{x}\,dx,    (15)

the non-negative function a_1(x) - a_2(x) satisfies all the conditions imposed upon a(x) in Theorem 2. Thus we can say there is a distribution function H(x), say, of a non-negative random variable, such that

    \exp\Big\{-\int_0^s [\hat a_1(z) - \hat a_2(z)]\,dz\Big\} = H^*(s).    (16)

Because of the convergence of (15) it will be seen from the proof of Theorem 2 that the function H(x) will have a discontinuity at the origin, but will otherwise be absolutely continuous. From (16) it follows that

    G_{01}(x) = \int_0^x G_{02}(x-z)\,dH(z),

and therefore G_{01}(x) \le G_{02}(x) for all x > 0, as claimed.

Let us now write

    g_\delta^{[2]}(x) = \int_0^x g_\delta(x-z)\,g_\delta(z)\,dz,

and, for n > 2,

    g_\delta^{[n]}(x) = \int_0^x g_\delta^{[n-1]}(x-z)\,g_\delta(z)\,dz.

Then, from the proof of Theorem 2, it is evident that G_{\delta 1}(x) has a jump at the origin of amount e^{-I_{\delta 1}}, but is absolutely continuous otherwise, and, for almost all x > 0, its density is

    e^{-I_{\delta 1}}\sum_{n=1}^\infty \frac{(I_{\delta 1})^n}{n!}\,g_{\delta 1}^{[n]}(x).

But, by our hypothesis, I_{\delta 1}\,g_{\delta 1}(x) \ge I_{\delta 2}\,g_{\delta 2}(x) for all x. Thus, if 0 < \alpha < \beta < \infty,

    \int_\alpha^\beta dG_{\delta 1}(x) \ge e^{-(I_{\delta 1} - I_{\delta 2})}\int_\alpha^\beta dG_{\delta 2}(x).

If we now let \delta decrease to zero we find that

    G_{01}(\beta) - G_{01}(\alpha) \ge \exp\Big\{-\int_0^\infty \frac{a_1(x) - a_2(x)}{x}\,dx\Big\}\,[G_{02}(\beta) - G_{02}(\alpha)],

that is

    \int_\alpha^\beta \Delta_{a_1}(x)\,dx \ge \exp\Big\{-\int_0^\infty \frac{a_1(x) - a_2(x)}{x}\,dx\Big\}\int_\alpha^\beta \Delta_{a_2}(x)\,dx.

Since the last inequality holds for arbitrary \alpha and \beta (> 0), the final contention of Theorem 4 is proved.

Proof of Theorem 1, Part (1.1). It is clear that the integral

    \int_0^1 \frac{1 - F(x)}{x}\,dx

diverges, and an integration by parts will show that

    \int_0^\infty \log(1+x)\,dF(x) = \int_0^\infty \frac{1 - F(x)}{1+x}\,dx.

Thus, if we put a(x) = c\{1 - F(x)\} then this function satisfies all the conditions of Theorem 2. Upon noting that

    \int_0^\infty e^{-zx}\{1 - F(x)\}\,dx = \frac{1 - \varphi(z)}{z},

we can therefore infer that

    \exp\Big\{-c\int_0^s \frac{1 - \varphi(z)}{z}\,dz\Big\}

is, indeed, the Laplace–Stieltjes transform of an absolutely continuous distribution function G(x). Furthermore, if we write g(x) for a density function corresponding to G(x) then we may put

    x\,g(x) = c\int_0^x \{1 - F(x-z)\}\,g(z)\,dz = c\,G(x) - c\int_0^x G(x-z)\,dF(z).    (17)

The distribution function G(x) is continuous and therefore \int_0^x G(x-z)\,dF(z) is also a continuous function of x. Equation (17) therefore shows g(x) to be continuous, as claimed.
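It may be worth pausing to verify (17) in the illustrative exponential case mentioned after Theorem 1 (again, the particular example is ours and is not used below). There F(x) = 1 - e^{-x}, G is the gamma distribution with density g(x) = x^{c-1}e^{-x}/\Gamma(c), and

    \int_0^x G(x-z)\,dF(z) = \int_0^x \{1 - e^{-(x-z)}\}\,g(z)\,dz = G(x) - \frac{e^{-x}x^c}{\Gamma(c+1)},

so that the right-hand side of (17) equals c\,e^{-x}x^c/\Gamma(c+1) = x^c e^{-x}/\Gamma(c) = x\,g(x), as it should.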
Proof of Part (1.2). From (17) we see that

    \frac{d}{dx}\big[x^{-c}G(x)\big] = -c\,x^{-(1+c)}\int_0^x G(x-z)\,dF(z).    (18)

The right-hand side of (18) is negative; therefore x^{-c}G(x) = D_1(x), say, is a decreasing function, as was to be proved. We also note that should F(x) = 0 for all x < T then, by (18), D_1(x) is constant for all x < T.

Suppose that D_1(x) increases to a finite limit A, say, as x decreases to zero. Then, by a familiar Abelian theorem for Laplace–Stieltjes transforms (Widder, 1941, p. 181), G^*(s) \sim A\,\Gamma(1+c)\,s^{-c} as s \to \infty through real values. Therefore

    -c\int_0^s \frac{1 - \varphi(z)}{z}\,dz + c\log s \to \log[A\,\Gamma(1+c)],

that is

    c\int_1^s \frac{\varphi(z)}{z}\,dz \to \log[A\,\Gamma(1+c)] + c\int_0^1 \frac{1 - \varphi(z)}{z}\,dz, as s \to \infty.

But, by Fubini's theorem,

    \int_1^s \frac{\varphi(z)}{z}\,dz = \int_1^s\int_0^\infty e^{-zx}F(x)\,dx\,dz = \int_0^\infty \frac{e^{-x} - e^{-sx}}{x}\,F(x)\,dx.

We can thence deduce from Beppo Levi's theorem that

    \int_1^\infty \frac{\varphi(z)}{z}\,dz = \int_0^\infty \frac{e^{-x}F(x)}{x}\,dx,

whether these expressions be finite or not. From all this we may conclude that A is finite if and only if F(x)/x belongs to L^1(0, 1), as is claimed in this part of the theorem. When A happens to be finite we see that

    \log[A\,\Gamma(1+c)] = c\int_0^\infty \frac{e^{-x}F(x)}{x}\,dx - c\int_0^1 \frac{1 - \varphi(z)}{z}\,dz.

But

    \int_0^1 \frac{1 - \varphi(z)}{z}\,dz = \int_0^\infty \frac{1 - e^{-x}}{x}\,[1 - F(x)]\,dx,

and so

    \log[A\,\Gamma(1+c)] = -c\int_0^\infty \frac{1 - e^{-x} - F(x)}{x}\,dx,

which proves the value claimed for D_1(0+). This completes the proof of Part (1.2).

Proof of Part (1.3). From (17) we have, for x > 0,

    \frac{d}{dx}\big[x^{-c}\{1 - G(x)\}\big] = -c\,x^{-(1+c)}\{1 - K(x)\},    (19)

where K(x) is the absolutely continuous distribution function

    K(x) = \int_0^x F(x-z)\,dG(z).    (20)

From (19) it is apparent that x^{-c}\{1 - G(x)\} = D_2(x), say, is a decreasing function with an increasing derivative; in particular, D_2(x) is convex.

Now suppose that (4) holds, that is, for some 0 \le \gamma < 1 and some function of slow growth L(x),

    \int_0^x [1 - F(y)]\,dy \sim x^{\gamma} L(x), as x \to \infty.

Then, by a slightly more complicated Abelian theorem than the one we have already used (Doetsch, 1950), we have that

    \frac{1 - \varphi(s)}{s} \sim \frac{\Gamma(1+\gamma)\,L(s^{-1})}{s^{\gamma}}, as s \to 0+

through real values. However, we can discover from (1) that, as s \to 0+ (the integral in the exponent of (1) tending to zero),

    \frac{1 - G^*(s)}{s} \sim \frac{c}{s}\int_0^s \frac{1 - \varphi(z)}{z}\,dz \sim \frac{c\,\Gamma(1+\gamma)}{s}\int_0^s \frac{L(z^{-1})}{z^{\gamma}}\,dz.

Before we can proceed we must discover the asymptotic behaviour of the integral on the right. By an obvious change of variable we have

    \int_0^s \frac{L(z^{-1})}{z^{\gamma}}\,dz = s^{1-\gamma}\int_1^\infty \frac{L(u/s)}{u^{2-\gamma}}\,du.

Now Karamata (1930) has shown that for a given function of slow growth L(x) there is necessarily a representation L(x) = \rho(x)\exp\{\int_1^x \frac{\epsilon(t)}{t}\,dt\}, in which \rho(x) \to 1 and \epsilon(t) \to 0 as the argument tends to infinity. From this fact it is an easy deduction that, for arbitrary \epsilon > 0,

    \frac{L(u/s)}{L(s^{-1})} < 2\,u^{\epsilon}

for all u \ge 1 and all sufficiently small s. Therefore, if \gamma < 1, we can appeal to dominated convergence to infer that

    \lim_{s \to 0+}\frac{1}{s^{1-\gamma}L(s^{-1})}\int_0^s \frac{L(z^{-1})}{z^{\gamma}}\,dz = \int_1^\infty \frac{du}{u^{2-\gamma}} = \frac{1}{1-\gamma}.

Hence

    \frac{1 - G^*(s)}{s} \sim \frac{c\,\Gamma(1+\gamma)\,L(s^{-1})}{(1-\gamma)\,s^{\gamma}}, as s \to 0+.

From a Tauberian theorem for Laplace transforms (Doetsch, 1950, p. 511) we can then deduce that

    \int_0^x \{1 - G(y)\}\,dy \sim \frac{c\,L(x)\,x^{\gamma}}{1-\gamma}, as x \to \infty.

Furthermore, from (20), K^*(s) = \varphi(s)\,G^*(s), so that, as s \to 0+,

    \frac{1 - K^*(s)}{s} = \frac{1 - \varphi(s)}{s} + \varphi(s)\,\frac{1 - G^*(s)}{s} \sim \frac{(1 - \gamma + c)\,\Gamma(1+\gamma)\,L(s^{-1})}{(1-\gamma)\,s^{\gamma}}.

Hence, by another Tauberian argument,

    \int_0^x \{1 - K(y)\}\,dy \sim \frac{(1 - \gamma + c)\,L(x)\,x^{\gamma}}{1-\gamma}, as x \to \infty.

If we multiply (19) by x^{1+c} and integrate by parts we find that

    x\{1 - G(x)\} = (1+c)\int_0^x \{1 - G(y)\}\,dy - c\int_0^x \{1 - K(y)\}\,dy.    (21)

From the asymptotic results we have obtained it follows from (21) that

    1 - G(x) \sim \frac{c\gamma\,L(x)}{(1-\gamma)\,x^{1-\gamma}}, as x \to \infty.
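As an illustration of (5), with a tail chosen by us purely as an example, suppose 1 - F(x) = x^{-1/2} for x \ge 1 (and F(x) = 0 for x < 1). Then condition (2) holds, \int_0^x [1 - F(y)]\,dy = 2x^{1/2} - 1 \sim 2x^{1/2}, so that (4) is satisfied with \gamma = 1/2 and L(x) \to 2, and (5) gives

    1 - G(x) \sim \frac{c\cdot\frac12\cdot 2}{\frac12\,x^{1/2}} = \frac{2c}{x^{1/2}} = 2c\,[1 - F(x)], as x \to \infty;

in this instance the tail of G is asymptotically 2c times the tail of F.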
In the case \gamma = 1 we cannot employ the dominated convergence argument and the results come out somewhat differently. Let us define

    M(x) = \int_x^\infty \frac{L(z)}{z}\,dz.

Then for any fixed a > 0

    M(ax) = \int_{ax}^\infty \frac{L(z)}{z}\,dz = \int_x^\infty \frac{L(az)}{z}\,dz,

and hence, for an arbitrary \epsilon > 0 and all sufficiently large x,

    (1-\epsilon)\int_x^\infty \frac{L(z)}{z}\,dz < M(ax) < (1+\epsilon)\int_x^\infty \frac{L(z)}{z}\,dz.

It is obvious therefore that M(ax)/M(x) \to 1 as x \to \infty and that M(x) is consequently a function of slow growth. We thus obtain for this case

    \frac{1 - G^*(s)}{s} \sim \frac{c\,M(s^{-1})}{s}, as s \to 0+,

and so, via the Tauberian theorem,

    \int_0^x \{1 - G(y)\}\,dy \sim c\,x\,M(x), as x \to \infty.

We shall show in a moment that L(x)/M(x) \to 0 as x \to \infty. It then follows, as before, that

    \frac{1 - K^*(s)}{s} = \frac{1 - \varphi(s)}{s} + \varphi(s)\,\frac{1 - G^*(s)}{s} \sim \frac{c\,M(s^{-1})}{s},

and so

    \int_0^x \{1 - K(y)\}\,dy \sim c\,x\,M(x), as x \to \infty.

From (21) we can then deduce that

    \{1 - G(x)\} \sim c\,M(x), as x \to \infty,

which was to be proved.

To see that L(x)/M(x) \to 0 as x \to \infty we observe that for \Delta arbitrarily large and positive

    M(x) > \int_x^{x\Delta} \frac{L(z)}{z}\,dz = \int_1^\Delta \frac{L(ux)}{u}\,du.

Thus

    \frac{M(x)}{L(x)} > \int_1^\Delta \frac{L(ux)}{L(x)}\,\frac{du}{u},

and so, by a dominated convergence which can be justified much as before,

    \liminf_{x \to \infty} \frac{M(x)}{L(x)} \ge \int_1^\Delta \frac{du}{u} = \log\Delta.

Since \Delta is arbitrary, this establishes the correctness of our assertion.

Proof of Part (1.4). By (17) and (20) we have

    x\,g(x) = c\,G(x) - c\,K(x),    (22)

so that x\,g(x) is of bounded variation as claimed. If we differentiate this last equation (and write k(x) for the, necessarily continuous, density function associated with K(x)) we find that

    x\,g'(x) + (1 - c)\,g(x) = -c\,k(x),

which implies that

    \frac{d}{dx}\big[x^{1-c}g(x)\big] = -c\,x^{-c}\,k(x).

Therefore x^{1-c}g(x) = d(x), say, where d(x) is a strictly decreasing function. If d(0+) = \infty then, given any large \Delta, we have

    g(x) > \Delta\,x^{-(1-c)}

for all sufficiently small x, and therefore G(x) > \Delta\,x^{c}/c for all sufficiently small x; hence D_1(0+) = \infty. On the other hand, if d(0+) = \Delta < \infty it is clear that, for small x, G(x) \sim d(0+)\,x^c/c, so that d(0+) = c\,D_1(0+) and, incidentally, D_1(0+) is seen to be finite. Thus d(0+) is finite if and only if D_1(0+) is finite, that is, if and only if (3) holds.

To complete the proof of this part we need the following

Lemma 1. If \int_0^x \{1 - F(y)\}\,dy \sim x^{\gamma} L(x) as x \to \infty, where 0 \le \gamma \le 1 and L(x) is a function of slow growth, then

    s^{\gamma}\,\varphi'(s) \sim -(1-\gamma)\,\Gamma(1+\gamma)\,L(s^{-1}), as s \to 0+.

Proof. We note that, as x \to \infty,

    \frac{1}{x^{1+\gamma}}\int_0^x\Big\{\int_0^y [1 - F(z)]\,dz\Big\}\,dy \sim \frac{1}{x^{1+\gamma}}\int_0^x y^{\gamma}L(y)\,dy \sim \frac{L(x)}{\gamma+1},

by Théorème 1 of Karamata (1930, p. 40). But an integration by parts shows that

    \frac{1}{x^{1+\gamma}}\int_0^x y\{1 - F(y)\}\,dy = \frac{1}{x^{\gamma}}\int_0^x \{1 - F(z)\}\,dz - \frac{1}{x^{1+\gamma}}\int_0^x\Big\{\int_0^y [1 - F(z)]\,dz\Big\}\,dy,

and hence we have

    \int_0^x y\{1 - F(y)\}\,dy \sim \frac{\gamma\,x^{\gamma+1}L(x)}{\gamma+1}, as x \to \infty.

The Laplace transform of x\{1 - F(x)\} is

    \frac{1 - \varphi(s)}{s^2} + \frac{\varphi'(s)}{s},

and so, by the Abelian theorem for Laplace transforms, as s \to 0+,

    \frac{1 - \varphi(s)}{s^2} + \frac{\varphi'(s)}{s} \sim \frac{\gamma\,\Gamma(\gamma+2)\,L(s^{-1})}{(\gamma+1)\,s^{\gamma+1}}.

But

    \frac{1 - \varphi(s)}{s^2} \sim \frac{\Gamma(1+\gamma)\,L(s^{-1})}{s^{\gamma+1}},

from the hypothesis \int_0^x \{1 - F(y)\}\,dy \sim x^{\gamma}L(x). Thus

    \frac{s^{\gamma}\varphi'(s)}{L(s^{-1})} \to \frac{\gamma\,\Gamma(\gamma+2)}{\gamma+1} - \Gamma(1+\gamma) = -(1-\gamma)\,\Gamma(1+\gamma),

as claimed.

Returning to the proof of Part (1.4), let us define

    r(x) = \int_0^x g(x-z)\,z\,dF(z).

Then

    \int_0^\infty e^{-sx}\,r(x)\,dx = -\varphi'(s)\,G^*(s),

and so, under the conditions of Lemma 1,

    \int_0^\infty e^{-sx}\,r(x)\,dx \sim \frac{(1-\gamma)\,\Gamma(1+\gamma)\,L(s^{-1})}{s^{\gamma}}, as s \to 0+.

Therefore, by the Tauberian theorem we have been using,

    \int_0^x r(y)\,dy \sim (1-\gamma)\,x^{\gamma}L(x), as x \to \infty.

If we convolute both sides of (22) with F(x) we find

    x\,k(x) - r(x) = c\,K(x) - c\,H(x),    (23)

where H(x) is the distribution function

    H(x) = \int_0^x K(x-z)\,dF(z).

On integrating (23) we obtain

    -x\{1 - K(x)\} + \int_0^x \{1 - K(y)\}\,dy - \int_0^x r(y)\,dy = c\int_0^x \{K(y) - H(y)\}\,dy.    (24)

From (21), (22), and (24) we then find

    \frac{x^2 g(x)}{c} = (1+c)\int_0^x \{G(y) - K(y)\}\,dy - \int_0^x r(y)\,dy - c\int_0^x \{K(y) - H(y)\}\,dy.    (25)

The function \{G(y) - K(y)\} is non-negative and its Laplace transform is easily seen to be

    \frac{G^*(s)\,[1 - \varphi(s)]}{s} \sim \frac{\Gamma(1+\gamma)\,L(s^{-1})}{s^{\gamma}}, as s \to 0+.

Thus

    \int_0^x \{G(y) - K(y)\}\,dy \sim x^{\gamma}L(x), as x \to \infty.

Similarly, the non-negative function \{K(y) - H(y)\} has Laplace transform \varphi(s)\,G^*(s)\,[1 - \varphi(s)]/s, and so

    \int_0^x \{K(y) - H(y)\}\,dy \sim x^{\gamma}L(x), as x \to \infty,

also. We now have enough asymptotic results to deduce from (25) that

    \frac{x^2 g(x)}{c} \sim \gamma\,x^{\gamma}L(x), i.e. g(x) \sim \frac{c\gamma\,L(x)}{x^{2-\gamma}}, as x \to \infty.

This completes the proof of this part.
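Continuing, for illustration only, the example mentioned above in the proof of Part (1.3): with 1 - F(x) = x^{-1/2} for x \ge 1 we have \gamma = 1/2 and L(x) \to 2, so (8) gives

    g(x) \sim \frac{c\cdot\frac12\cdot 2}{x^{3/2}} = \frac{c}{x^{3/2}}, as x \to \infty,

which is what one obtains by formally differentiating the relation 1 - G(x) \sim 2c\,x^{-1/2} found earlier.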
Proof of Part (1.5). We begin with

Lemma 2. Suppose f_1(x), f_2(x), f_3(x) are bounded integrable functions such that

    f_1(x) = \int_0^x f_2(x-z)\,f_3(z)\,dz, x > 0,

and suppose that, for some \mu > \lambda > 0, A > 0, \beta \ge 0, 1 > \alpha > 0,

    f_2(x) = O(e^{-\mu x}),  f_3(x) = O\big(e^{-\lambda x + Ax^{\alpha}}\,x^{-\beta}\big), as x \to \infty.

Then f_1(x) = O\big(e^{-\lambda x + Ax^{\alpha}}\,x^{-\beta}\big) as x \to \infty.

Proof. For any m > 0,

    \frac{d}{dx}\Big\{\frac{e^{\lambda x + Ax^{\alpha}}}{(m+x)^{\beta}}\Big\} = \frac{e^{\lambda x + Ax^{\alpha}}}{(m+x)^{\beta}}\Big\{\lambda + A\alpha x^{\alpha-1} - \frac{\beta}{m+x}\Big\},

so that we can always choose m large enough to make

    f(x) = \frac{e^{\lambda x + Ax^{\alpha}}}{(m+x)^{\beta}}

an increasing function of x > 0. Having chosen m we can then find constants N_1, N_2 such that

    f_2(x) \le N_1\,e^{-\mu x},  f_3(x) \le N_2\,\frac{e^{-\lambda x + Ax^{\alpha}}}{(m+x)^{\beta}},

for all x. Thus

    f_1(x) \le N_1 N_2\int_0^x e^{-\mu(x-z)}\,\frac{e^{-\lambda z + Az^{\alpha}}}{(m+z)^{\beta}}\,dz \le N_1 N_2\,\frac{e^{-\lambda x + Ax^{\alpha}}}{(m+x)^{\beta}}\int_0^x e^{-(\mu-\lambda)(x-z)}\,dz,

in view of the fact that f(x) increases. Thus the lemma is proved.

That part of (1.5) concerning the case \nu = 0 is already covered by Theorem 3. We shall therefore assume from here on that \nu > 0, and suppose there are constants A > 0, \lambda > 0, and \Delta > 0 such that

    1 - F(x) \le \frac{A\,e^{-\lambda x}x^{\nu}}{\Gamma(\nu+1)} for all x > \Delta.

Define

    \bar e(x) = c\,\mathrm{Max}\Big\{1 - F(x),\ \frac{A\,e^{-\lambda x}x^{\nu}}{\Gamma(\nu+1)}\Big\}.

Then \bar e(x) \ge c\{1 - F(x)\} for all x, and \bar e(x) = c\{1 - F(x)\} for all sufficiently small x. Therefore

    \int_0^\infty \frac{|\bar e(x) - c\{1 - F(x)\}|}{x}\,dx < \infty,    (26)

and we can deduce from Theorem 4 that

    g(x) = O(\Delta_{\bar e}(x)), for almost all x.

Define

    \bar a(x) = \frac{c\,A\,e^{-\lambda x}x^{\nu}}{\Gamma(\nu+1)} and \beta(x) = \bar e(x) - \bar a(x).

Then \beta(x) \ge 0 for all x and \beta(x) = 0 for all x > \Delta, so that \Delta_{\beta}(x) is defined. Moreover, since \beta(x) = O(e^{-\eta x}) for arbitrarily large \eta, it follows from Theorem 3 that

    \Delta_{\beta}(x) = O(e^{-\eta x}), for \eta arbitrarily large.    (27)

We also note that

    \Delta_{\bar e}(x) = \int_0^x \Delta_{\beta}(x-z)\,\Delta_{\bar a}(z)\,dz.    (28)

For typographic ease, let us write

    \psi(x) = x^{-\frac{\nu+2}{2(\nu+1)}}\exp\Big\{-\lambda x + \frac{\nu+1}{\nu}\,(Ac)^{\frac{1}{\nu+1}}\,x^{\frac{\nu}{\nu+1}}\Big\}.
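To fix ideas, and merely as a specialization of the definition just given, we note that when \nu = 1 the function \psi(x) becomes

    \psi(x) = x^{-3/4}\exp\{-\lambda x + 2(Acx)^{1/2}\},

so that for \nu = 1 the assertion of (1.5) reads g(x) = O\big(x^{-3/4}\,e^{-\lambda x + 2\sqrt{Acx}}\big).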
Then, in view of (26), (27), (28), and Lemma 2, we shall have proved (1.5) if we show that \Delta_{\bar a}(x) = O(\psi(x)). Our task thus becomes one of estimating \Delta_{\bar a}(x).

From all that we have proved so far we can say that \Delta_{\bar a}(x) is continuous and locally of bounded variation; also, from Theorem 3,

    \Delta_{\bar a}(x) = O(e^{-\gamma x}) for any \gamma < \lambda.    (29)

Since \hat{\bar a}(s) = Ac\,(s+\lambda)^{-(\nu+1)}, we have

    \Delta_{\bar a}^*(s) = \exp\Big\{-\frac{Ac}{\nu\lambda^{\nu}}\Big\}\exp\Big\{\frac{Ac}{\nu(s+\lambda)^{\nu}}\Big\},

and we can deduce from Theorem 7.3 of Widder (1941, p. 66) that

    \Delta_{\bar a}(x) = \lim_{T \to \infty}\frac{1}{2\pi i}\int_{-\gamma - iT}^{-\gamma + iT} e^{sx}\,\Delta_{\bar a}^*(s)\,ds.

Apart from the constant factor \exp\{-Ac/(\nu\lambda^{\nu})\}, which we may ignore, the integrand is e^{h(s)}, where we put

    h(s) = sx + \frac{Ac}{\nu(s+\lambda)^{\nu}}.

Then

    h'(s) = x - \frac{Ac}{(s+\lambda)^{\nu+1}},

so that h'(s) = 0 where

    s = -\lambda + \delta(x), say, \delta(x) = \Big(\frac{Ac}{x}\Big)^{\frac{1}{\nu+1}},

and this is a point on the real axis a little to the right of the point s = -\lambda. Choose an arbitrarily small \epsilon > 0. As the real parameter t runs from -\epsilon to +\epsilon the point

    s = -\lambda + \delta(x)\,(1 - t^2 + it)

runs along a small parabolic arc C, say. Now

    h''(s) = \frac{Ac\,(\nu+1)}{(s+\lambda)^{\nu+2}}.

Thus, for all s on C we have |h''(s)| \le K_1\,\delta^{-(\nu+2)}, where K_1 is some constant which does not depend on \delta and \epsilon, provided they are both small. Therefore, if s is any point on C,

    h(s) = h(-\lambda+\delta) + \tfrac12\,(s+\lambda-\delta)^2\,h''(s^*),

where s^* is some point on C between s and -\lambda+\delta. Hence, on C,

    \Re\,h(s) = -\lambda x + x\delta + \frac{Ac}{\nu\delta^{\nu}} - \frac{Ac\,(\nu+1)\,t^2}{2\delta^{\nu}}\,\{1 + \rho_1(t)\},

where |\rho_1(t)| < K_2\,\epsilon, K_2 being some further constant which does not depend on \delta(x) or \epsilon. Noting that ds = \delta(i - 2t)\,dt, we thus have, for some constant K_3 not depending on \epsilon or \delta,

    \Big|\int_C e^{h(s)}\,ds\Big| \le (1 + 2\epsilon)\,\delta\,\exp\Big\{-\lambda x + x\delta + \frac{Ac}{\nu\delta^{\nu}}\Big\}\int_{-\epsilon}^{\epsilon}\exp\Big\{-\frac{Ac\,(\nu+1)(1 - K_2\epsilon)\,t^2}{2\delta^{\nu}}\Big\}\,dt \le K_3\,\delta^{\frac{\nu+2}{2}}\exp\Big\{-\lambda x + x\delta + \frac{Ac}{\nu\delta^{\nu}}\Big\}.

If we substitute for \delta in terms of x in the last inequality, we discover

    \int_C e^{h(s)}\,ds = O(\psi(x)), as x \to \infty.

Let T be a large positive number, \eta a small one (we may and do suppose \eta < \epsilon\delta), and let \mathcal L(T) be the line mapped out by

    s = -\lambda + \delta(1 - \epsilon^2) - \eta t + i(\epsilon\delta + t)

as t runs from 0 to T. Notice that \mathcal L(T) is a straight line segment sloping away from the imaginary axis and linking up with the upper end of C. On \mathcal L(T) we have |s+\lambda| \ge \delta r_1, where r_1^2 = (1-\epsilon^2)^2 + \epsilon^2, and therefore

    \Re\,h(s) \le -\lambda x + x\delta(1 - \epsilon^2) + \frac{Ac}{\nu\,\delta^{\nu} r_1^{\nu}} - \eta x t.

On substituting for \delta in terms of x we find

    x\delta(1-\epsilon^2) + \frac{Ac}{\nu\,\delta^{\nu} r_1^{\nu}} = (Ac)^{\frac{1}{\nu+1}}\,x^{\frac{\nu}{\nu+1}}\,w, say, where w = 1 - \epsilon^2 + \frac{1}{\nu\,r_1^{\nu}}.

Since r_1^{-\nu} = 1 + \tfrac12\nu\epsilon^2 + O(\epsilon^4), there is a constant K_4 > 0 such that, for all sufficiently small \epsilon,

    w \le \frac{\nu+1}{\nu} - K_4\,\epsilon^2.

Thus, noting that |ds| = (1+\eta^2)^{1/2}\,dt \le 2\,dt, we see that on \mathcal L(T)

    \Big|\int_{\mathcal L(T)} e^{h(s)}\,ds\Big| \le 2\exp\Big\{-\lambda x + \frac{\nu+1}{\nu}(Ac)^{\frac{1}{\nu+1}}x^{\frac{\nu}{\nu+1}} - K_4\,\epsilon^2\,(Ac)^{\frac{1}{\nu+1}}x^{\frac{\nu}{\nu+1}}\Big\}\int_0^T e^{-\eta x t}\,dt.

Hence

    \int_{\mathcal L(T)} e^{h(s)}\,ds = o(\psi(x)), as x \to \infty,    (30)

and this result is uniform in T.

Lastly, consider the straight line segment \mathcal M(T), say, which is parallel to the real axis and mapped out by

    s = (-\lambda + \delta - \epsilon^2\delta^2 - t) + i(\epsilon\delta + T)

as t runs from 0 to T_1, where T_1 is chosen so that \mathcal M(T) links the line \Re s = -\lambda + \delta - \epsilon^2\delta^2 with the upper end of \mathcal L(T). On \mathcal M(T) we have |s+\lambda| \ge T, and hence, for some K_5 which is independent of T provided it is sufficiently large, and of \epsilon and \delta provided they are both small,

    \Big|\int_{\mathcal M(T)} e^{h(s)}\,ds\Big| \le K_5\,e^{-\lambda x + x\delta - \epsilon^2\delta^2 x} = o(\psi(x)), as x \to \infty,    (31)

uniformly in T.

In (29) we may suppose that \gamma = \lambda - \delta + \epsilon^2\delta^2; this choice is admissible, since then 0 < \gamma < \lambda provided \epsilon is small enough, and the line \Re s = -\gamma then links up with \mathcal M(T) and its reflexion in the real axis. Combining the estimate for the integral along C with (30) and (31), and noting especially the uniformity of (30) and (31) with respect to T, we can now easily prove

    \Delta_{\bar a}(x) = O(\psi(x)), as x \to \infty.

This completes the proof of the theorem.

Appendix

Let \{F_n(x)\} be an infinite sequence of distribution functions of non-negative random variables and, for real s \ge 0, let

    F_n^*(s) = \int_{0-}^{\infty} e^{-sx}\,dF_n(x), n = 1, 2, ...,

be the corresponding Laplace–Stieltjes transforms. Suppose F(x) is a further distribution function of a non-negative random variable and that F_n(x) \to F(x) as n \to \infty, at every continuity point of F(x). Then, by dominated convergence,

    \int_0^\infty e^{-sx}\,F_n(x)\,dx \to \int_0^\infty e^{-sx}\,F(x)\,dx, as n \to \infty,

for every fixed real s > 0. Hence F_n^*(s) \to F^*(s) as n \to \infty, for every s \ge 0 (for F_n^*(0) = F^*(0) = 1 for all n).

On the other hand, suppose F_n^*(s) \to \Omega(s), as n \to \infty, for every real s \ge 0; suppose further that \Omega(s) is continuous to the right at the origin. By the usual Helly–Bray compactness argument there is a bounded non-decreasing function M(x), say, and a subsequence \{F_{n_m}(x)\} such that F_{n_m}(x) \to M(x) at every continuity point of M(x). Moreover, we can take M(x) = 0 for x < 0. By the dominated convergence argument already used we see that F_{n_m}^*(s) \to M^*(s), and so M^*(s) = \Omega(s) for all real s > 0. But F_n^*(0) = 1 for all n and so \Omega(0) = 1. However, \Omega(s) is continuous to the right at the origin and hence M^*(0+) = 1. This proves that M(x) is a distribution function, and indeed the unique distribution with Laplace–Stieltjes transform M^*(s) = \Omega(s). By familiar reasoning it now follows that F_n(x) \to M(x) at every continuity point of M(x).

REFERENCES

G. Dall'Aglio (1963), Present value of a renewal process, Institute of Statistics Mimeo Series No. 366, Chapel Hill, N. C.

G. Doetsch (1950), Handbuch der Laplace-Transformation, Vol. I, Verlag Birkhauser, Basel.

J. Karamata (1930), Sur un mode de croissance reguliere des fonctions, Mathematica (Cluj), 4, 38-53.

L. Takacs (1955), On stochastic processes connected with certain physical recording apparatuses, Acta Math. Acad. Sci. Hungar., 6, 363-380.

D. V. Widder (1941), The Laplace Transform, Princeton University Press.