
Spatially offset Raman spectroscopy

From Wikipedia, the free encyclopedia

Spatially offset Raman spectroscopy (SORS)[1] is a variant of Raman spectroscopy that allows highly accurate chemical analysis of objects beneath obscuring surfaces such as tissue, coatings and bottles. Examples of uses include analysis of bone beneath skin,[2] tablets inside plastic bottles,[3] explosives inside containers[4] and counterfeit tablets inside blister packs. There have also been advances toward deep, non-invasive medical diagnosis using SORS, with the hope of detecting breast tumors.
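The separation of sub-surface from surface signal is commonly done by scaled subtraction of the zero-offset spectrum from an offset spectrum. A minimal sketch, assuming spectra are stored as NumPy arrays and that a band known to belong only to the surface layer is available to fix the scale factor (the function and variable names are illustrative, not a prescribed algorithm):

    import numpy as np

    def sors_scaled_subtraction(offset_spectrum, zero_offset_spectrum, surface_band):
        """Estimate the subsurface Raman spectrum by scaled subtraction.

        The zero-offset spectrum is dominated by the surface layer; scaling it
        so that a surface-only band cancels, then subtracting, leaves a
        spectrum dominated by the deeper layer.
        """
        # Scale factor chosen so the surface-only band vanishes after subtraction.
        k = offset_spectrum[surface_band].sum() / zero_offset_spectrum[surface_band].sum()
        return offset_spectrum - k * zero_offset_spectrum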

Intensity distribution of the diffuse light when one selects only the part coming out in reflection at a distance from the source.

Raman spectroscopy relies on inelastic scattering events of monochromatic light to produce a spectrum characteristic of a sample. The technique usually uses the red-shifted photons produced when monochromatic light loses energy to a vibrational motion within a molecule. The shift in colour and the probability of inelastic scatter are characteristic of the molecule that scatters the photon. A molecule may produce 10 to 20 or more major lines, the number being restricted only by the number of bonds and symmetry constraints. Importantly, the spectrum produced by a mixture is a linear combination of the component spectra, enabling relative chemical content to be determined in a simple spectroscopic measurement using chemometric analysis.
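Because the mixture spectrum is a linear combination of component spectra, relative content can be estimated by least squares against a library of reference spectra. A minimal sketch, in which the reference spectra and weights are invented purely for illustration (real chemometric analyses typically use non-negative least squares or more elaborate models):

    import numpy as np

    # Rows: reference Raman spectra of pure components (wavenumber bins as columns).
    references = np.array([
        [0.1, 0.9, 0.2, 0.0],   # hypothetical component A
        [0.0, 0.2, 0.8, 0.3],   # hypothetical component B
    ])

    measured = 0.7 * references[0] + 0.3 * references[1]  # synthetic mixture spectrum

    # Plain least squares shown for brevity; NNLS would be more robust in practice.
    weights, *_ = np.linalg.lstsq(references.T, measured, rcond=None)
    print(weights / weights.sum())  # relative chemical content, ~[0.7, 0.3]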

YouTube Encyclopedic

  • UXSS 2014: Coherent X-ray Scattering at Ultrafast Timescales
  • UXSS 2014: Soft X-ray General and Solid State Aspects
  • nanoHUB-U Fundamentals of AFM L3.5: AFM-The Instrument - Contact Mode Scans

Transcription

[MUSIC] Stanford University. >> All right, good morning. So basically, today the keyword is going to be "coherent." That's mostly what I'm going to try to tell you about, because that's sort of my background. While people are coming in, I'll mention first of all that it's difficult to follow the previous lecture, which was given by a Nobel Prize winner, so I have to try to keep up with that. One thing I wanted to mention is that I was actually here at, I believe, the first Ultrafast X-ray Summer School, in 2007. I was a postdoc then, but I was a participant, seven years ago. I think the iPhone had just come out, so all the students were talking about that; that should give you a time scale for things. It was a lot of fun, and that's when I actually started learning about ultrafast X-ray science.

And one more philosophical note. I was trying to look up who this quote belongs to: "It's difficult to make predictions, especially about the future." Some people say it's Niels Bohr and some people say Yogi Berra. I'm sure if you Google enough, you'll find it's Mark Twain; Mark Twain said everything. What I wanted to say here is that I would argue we still probably don't know what the best use for XFELs is. We know that some experiments are perhaps more successful than others, but it's still a very new type of machine, and perhaps it's up to you guys to figure out what it's going to be useful for. One example: in my field, coherent X-ray science is now quite commonplace at most of the third-generation synchrotrons, including some of the techniques I'll show you today, based on the observation of X-ray speckle. But if you look at the scientific case written to justify the third-generation synchrotrons, it was not even mentioned there. So the things that I, and a lot of other scientists, now do with third-generation synchrotrons were not anticipated until the sources were built. You can flip that around and ask: what are we missing now? What are the crucial experiments, the crucial types of measurements that we could do at XFELs, and I assume most of this school is about XFELs, that we're not thinking about now? So I want you guys to think a little bit outside the box. I'll show you what is possible, but try to think for yourself. I think it requires new people coming in from completely fresh backgrounds to start innovating in new directions. A lot of us started doing synchrotron work and ended up expanding into XFELs, but I think you need some blank-slate minds to approach this: an XFEL is a different kind of machine, that's what I'm trying to say.

Okay, so this is my talk. You can see all these beautiful pictures if you go around the guesthouse, or if you go on the SLAC website, and all of them basically show X-ray speckle, the fringes that have to do with the spatial coherence of the source. My talk today is going to try to convince you that this X-ray speckle can be very useful. XFELs have basically two main differences from traditional sources: not only is the pulse ultrafast, but the beam is also an essentially almost fully transversely, spatially, coherent source.
And the fact that it is ultrafast has been used in essentially every experiment, but the fact that it is spatially coherent has not been used in as many experiments as I think it should be, and so I'd like you to start thinking about what is possible. What can we use the speckle for? Why should we care that it is spatially coherent?

So why do we want to do ultrafast measurements? This is my take. A lot of times you want to freeze time, right? If you want to take a snapshot of some process that is very fast, like a chemical reaction, or if you want to do, let's say, probe-and-destroy single-molecule imaging, you need a camera that can take a snapshot faster than the process happens; otherwise you'll smear everything out. Everybody who does photography knows that if you have a child running around and you use a long, slow shutter, you will get a blurry image. You can also approach this question from the point of view of technology and ask how quickly we can do something, like switching a magnet or an optical switch, or something like that. Or you can ask, as a physicist: how quickly can you drive a phase transition? So from a chemical, biological, physical, or maybe even technological point of view, there are very different types of questions to ask. Another angle is that we can induce non-equilibrium states, phases that may not be possible at equilibrium, where you allow everything to equilibrate; I think that's an interesting angle. And a related angle is that a lot of times, especially in condensed matter physics, you have complex coupling, an interplay between many degrees of freedom, and to unravel them and figure out who's the driver and who's the passenger, you can use ultrafast excitations. The response times for the lattice, spin, and charge degrees of freedom can be very different; typically we think of the lattice as slow and charge as faster, although, as we discussed this morning, we find more and more examples where that is sometimes not the case. So that could be another tool for studying systems that have this kind of coupling.

Okay, going back: you've probably seen this picture somewhere, maybe in this slide. The idea is basically to go into this ultrafast regime and find out what the technology is, what the materials properties are, what the phases and spins are doing, how we can control phase transitions at these very fast time scales and very small length scales. This is why X-rays are good: for seeing things below this length scale, below essentially the wavelength of visible light; and the word "ultrafast" here describes what happens at ultrafast time scales, let's say picoseconds and below. And this illustrates my point again: as a condensed matter physicist, I like to think of most of hard condensed matter physics as some sort of complex interplay between these degrees of freedom: charge, spin, lattice, and sometimes people add orbital. Going to the ultrafast regime can allow us to figure out what the arrows connecting these different degrees of freedom are. And why is this important?
Well, many of these phases basically arise from competition, from frustration between many degrees of freedom, and that creates superconductivity, magnetism, insulator-metal transitions, and so forth. Basically almost every electronic or magnetic phase you can imagine results from the competition between charge, orbital, spin, and lattice degrees of freedom. We often see competition that results in frustration, for example, and as a result you also get spatially inhomogeneous distributions. In other words, the phases I'm talking about are very often very patchy. They have what is known as nanoscale phase separation: if you think about superconductors, or manganites, or metal-insulator transitions in simple oxides, very often they're characterized by this nanoscale texture. And that's going to be important too. Not only do we want to look at the temporal response of some of these systems; we also have to keep in mind that a lot of them have spatial inhomogeneity, and that's where coherence could play a key role in understanding what happens. I'll come back to that slide a little later.

Okay, so this is a slide I like to show. If you think about how fast CPUs have grown over basically a person's lifetime, say 60 years, they've grown by about 12 orders of magnitude. Everything we have at our disposal: our iPhones actually have more computing power than supercomputers 15 years ago. That's all a result of what is known as Moore's law, the fact that CPU speed doubles every couple of years. But if you look at the similar growth in the brightness of X-ray sources, we're going faster: about 18 orders of magnitude in just one person's lifetime. That, I think, is quite a remarkable achievement. It also explains why a lot of us struggle to analyze our data: the computers don't keep up with the amount of data we can generate at sources like LCLS.

A lot of times people like to compare the two; in fact, in this slide you see LCLS and APS, the sort of traditional storage ring that a lot of us older people are, I guess, still using, third-generation synchrotrons. And a lot of times people ask: what is the difference, what is the comparison? I would argue that this is a terrible thing to do, to try to compare the two. They're very different animals, right? It's like comparing sharks to elephants or something like that. They've never met, they don't know about each other; there is very little overlap between the two, I would argue. But let's do it anyway. This is a more traditional synchrotron storage ring, like the Advanced Photon Source near Chicago, which some of you may know or may have visited, and there are other sources like that; and this is LCLS, the guys next door. The one slide that, of course, people from Stanford like to show is this one. It shows peak spectral brightness, and we'll talk about that in a second.
Instead of the average brightness, in peak spectral brightness the jump between, let's say, an APS- or ESRF-type storage ring and an XFEL is about ten orders of magnitude; if you look at these numbers, it goes from 23 here to 33 or something like that. That's a huge jump, and yet there's only about a 100-times increase in the actual average spectral brightness. In fact, if you look at the average number of photons coming out every second, if you're sitting at a beamline at a storage ring, let's say APS, or at a beamline here at LCLS, the number of photons coming out that you detect per second, averaged over a long time, is going to be roughly the same, give or take an order of magnitude. So it's interesting that, even though the flux is about the same, the average spectral brightness here at SLAC is about 100 times better. That has to do with the fact that it's fully spatially coherent, whereas APS is not, and that is taken into account in the brightness calculation. But if you look at the peak value, it goes up by about ten orders of magnitude.

So where does this factor of ten orders of magnitude come from? Why is it so different? Well, it's because it's a very different type of source. First of all, think about what else has changed by a factor of ten to the ten in ten years. This slide basically shows that in ten years we went up by about ten orders of magnitude in peak spectral brightness, and we'll talk about what that means in a second. I couldn't come up with anything else that has changed that much in technology, or at any given time in human history. Does anybody know anything that has changed by ten to the ten in ten years, in human history? Maybe the amount of data we generate on YouTube, or something like that [INAUDIBLE].

But basically the reason is the following. Storage rings are very rapid machines: they operate at about ten megahertz, up to 100 megahertz for some machines, but let's take ten megahertz as the benchmark, whereas an XFEL runs at about 100 hertz. You have the same number of photons coming out per second, but the storage ring has about ten to the five more pulses; that means each pulse packs ten to the five more photons at LCLS, compared to, let's say, APS. That's a big number already. Then pulse duration: a typical storage ring is about 100 picoseconds, an XFEL about 100 femtoseconds, a pulse about 1,000 times shorter. So if you're interested in the peak value, you divide by another factor of ten to the three: not only do you have ten to the five more photons per pulse, you also squeeze those photons into a 1,000-times-shorter duration. And that really helps you understand how different XFELs are from storage rings. That's only ten to the eight, so what is the other factor of 100? This missing factor of 100 comes from the fact that LCLS and other sources like it are essentially fully spatially coherent. That means the beam is essentially diffraction limited.
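As a quick check, those round factors do multiply out to the ten orders of magnitude; a back-of-the-envelope sketch, using the round figures quoted above rather than precise machine parameters:

    # Round figures from the talk, not precise machine parameters.
    photons_per_pulse_gain = 1e7 / 1e2      # ~10 MHz storage ring vs ~100 Hz XFEL, same average flux
    pulse_shortening = 100e-12 / 100e-15    # 100 ps vs 100 fs pulse duration
    coherence_gain = 100                    # fully coherent beam vs partially coherent beam

    print(photons_per_pulse_gain * pulse_shortening * coherence_gain)  # 1e10: the peak-brightness jump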
You can also think about it, if you've taken an E&M class recently: it's a plane wave. Whereas at sources like APS, the wavefront is more chaotic; if you look at it, it's all warped and curved in places. So to select a single flat section of the wavefront at a storage ring, you have to throw away about a factor of 100, or 1,000 in some cases, it depends a little on energy and other parameters, to get an essentially fully spatially coherent source.

Okay, this is a slide I'll flash very quickly. I made these numbers come out almost the same, even though there's a little bit of fudging going on here. The total number of X-rays per second at LCLS and APS is about the same, but the number of pulses is different, and the number of X-rays per pulse is also very different. You can think of a traditional storage ring as almost quasi-CW (CW stands for continuous wave): you have about a nanojoule per pulse, nanoseconds between pulses, and pulse durations of picoseconds. If you zoom out and squint your eyes a little, you can almost argue that this is a flat, continuous source of photons: as soon as you open the shutter, they come out like water out of a faucet, all the time. Whereas at LCLS you have millijoules instead of nanojoules, in some cases even more, milliseconds from one pulse to the next, and each pulse is about femtoseconds instead of picoseconds. That helps you see that these are two very different types of sources; it's almost impossible to come up with an experiment that you would want to do at one machine but could equally do at the other. There are basically two different types of science, I would argue, that the two sources can access.

So what is this brilliance we're talking about? Let me give you a bit of a primer; maybe some of you already know this. When you talk about the properties of the source, we define something known as emittance. Emittance is basically the size of the source times the divergence, the angle of divergence. That product is called emittance, and it is more or less conserved if you use normal optics. That means that if, for example, you focus the beam to a smaller spot size, you also have to increase the divergence, so that the product stays the same. That's a very useful thing to know, because once you define this emittance, this product of divergence and size, any optical manipulation of the beam does not affect it: if you want to make the beam bigger, you can decrease the divergence, and vice versa. There are a lot of formulas here that I'm basically not going to talk about, because I want you to capture the main physics. The main idea is that you want to make this number, the product of divergence times size, as small as possible; there are actually two products, one in each dimension, and that defines essentially how close your source is to a diffraction-limited source.
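Written out, the quantities being described look roughly like this; a sketch in which the 4π² prefactor is the one quoted a moment later in the talk, while conventions for both it and the diffraction limit vary between references:

    \varepsilon_x = \sigma_x \sigma'_x, \qquad
    \varepsilon_y = \sigma_y \sigma'_y, \qquad
    \varepsilon_{x,y} \;\gtrsim\; \frac{\lambda}{4\pi}

    B \;=\; \frac{F}{4\pi^2\,\varepsilon_x\,\varepsilon_y}
      \;=\; \frac{F}{4\pi^2\,\sigma_x \sigma'_x\,\sigma_y \sigma'_y}

Here F is the total flux in photons per second, the sigmas are the source sizes in x and y, and the primed sigmas are the divergences.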
You want to make that product as small as possible, but it cannot be arbitrarily small: the uncertainty principle tells you there is a quantum mechanical limit to that product, right? It's delta-p times delta-x on the order of h-bar, basically the same argument. And then you can define the brightness, the quantity in all those curves I showed you. People sometimes say brilliance; I think it comes from the German word, and it means exactly the same thing, so a lot of people get confused when they hear "brilliance" and "brightness" and don't know what the difference is. It's basically the same term. It's the total flux, how many photons you have per second, normalized to the product of the dimensions of the beam in x and y and the divergence of the beam in x and y, the total emittance phase space, with some prefactor, 4 pi squared in this case. The main idea is that if you can make the emittance smaller, you can increase the brightness.

So one of the reasons LCLS is bright, in addition to having short pulses and more photons per pulse, is that it also has very low emittance. It's almost diffraction limited; for many X-ray wavelengths it is diffraction limited. And the reason, this is a very basic slide that I'll skip over quickly, is a basic explanation of how you generate those X-rays: essentially, it's a very long undulator. At APS, at storage rings, we don't use very long undulators, and the source size is basically limited by the expansion of the bunch of electrons as they go around many, many times. That's one of the reasons the electron bunch is much bigger than what you can create in a linear accelerator like [INAUDIBLE]. In this case you have all of the electrons essentially radiating coherently, and that's one of the reasons you get such high brightness from the source.

Another way of thinking about these orders-of-magnitude increases: consider the progression of different types of sources, from bending magnets, to something called a wiggler, to an undulator, which you can roughly think of as first-, second-, and third-generation synchrotrons. With an undulator, the idea is that you can put in more magnets; the brightness scales as the number of magnets squared, but it's still times the current to the first power. In an X-ray free-electron laser, the brightness is proportional to the current squared rather than linear in the current, and that's why you get such a huge boost in brightness. That's another way to think about the scaling parameters that make an X-ray free-electron laser so much brighter than our traditional storage rings.

Okay, one consequence of this brightness. Who cares about this brightness argument, who cares about this emittance? One consequence of the emittance is this: suppose you want to focus your beam, and let's say you have a beam that is partially incoherent, or partially coherent,
which you can think of as many different volumes, each individually coherent within itself, but with little phase relationship from one part of the beam to another. When you try to focus that beam, even if you somehow had a perfect optic, which is often not the case, it would still not focus to a point: you would basically just image the source, and if the source is large, the image here is also pretty large. So what a lot of people do is say, well, I'm going to aperture my beam down and create a virtual source, and then I can focus it to a smaller size. But if you have low emittance, meaning you can have a large beam with very small divergence, an almost parallel beam, then you can focus it to essentially a point. And because the emittance is limited by the uncertainty principle, if you focus something that is diffraction limited to begin with, you can get down to basically a fraction of the wavelength, if you had a perfect optic. Of course, not all optics are perfect; in many cases we're still limited by the optics. But if you want to do any kind of imaging with lenses, you want your source to be as diffraction limited as possible, and that's another important thing to keep in mind: LCLS is basically diffraction limited [INAUDIBLE], that is, if you wanted to focus it, you could get to as small a size as the optics allow.

So, coherence. I'm going to talk mostly about spatial coherence. Spatial coherence means the following: the wave is going this way, and I walk across the wavefront, normal, transverse, to the direction of propagation, and ask myself: if I know the phase at one point, how is it related to the phase somewhere else? In the transverse direction, there's going to be something called the coherence length, which tells you how far you have to walk in this direction before the phases become uncorrelated. There's also something known as longitudinal coherence, which is the same question but walking along the direction of propagation of the wave: how far is the phase correlated if you walk longitudinally?

I'm going to skip through some of this math, but you can derive this; it's simple geometry. If you have the size of the source, sigma-x and sigma-y (this sigma is supposed to be y, by the way), then you can define a coherence length that goes as the wavelength divided by the angular dimension of the source. So if you are a distance R from the source, the coherence length is defined by how far I have to walk across, along the x and y directions, before the contributions of the phases from two different points at the source, sigma-x apart, start to add up out of phase. You can do a simple calculation, simple geometry, looking at the path-length difference as I walk along this direction from two different points at the source. And so you can derive what are known as the coherence lengths, and you can select what is known as a coherent volume, with dimensions set by the coherence lengths in x and y.
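A small sketch of that estimate, using the common far-field form of the transverse coherence length; the 2π prefactor is one convention among several, and the example numbers are purely illustrative:

    import numpy as np

    def transverse_coherence_length(wavelength, source_size, distance):
        """Coherence length ~ wavelength / angular source size = lambda * R / (2 pi sigma)."""
        return wavelength * distance / (2 * np.pi * source_size)

    # E.g. 0.1 nm X-rays, 50 um source, 50 m from the source (illustrative numbers):
    print(transverse_coherence_length(1e-10, 50e-6, 50.0))  # ~1.6e-5 m, i.e. ~16 um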
At LCLS your beam is almost fully spatially coherent: the coherence lengths are essentially the size of the beam in most cases. At many other sources that is not the case, and if you then calculate the coherent flux, the flux in one coherent volume, it scales as the brightness (roughly, the flux in a single coherent mode goes as B times the wavelength over two, squared). And that makes a very important connection. When we talk about brightness, the curves I showed you, brightness or brilliance, everybody wants as bright a source as possible, and if you increase the brightness at the same wavelength, you automatically increase the amount of coherent flux that you have. That is, if I select a single coherent volume, a single coherent mode in my beam, the number of photons I can work with scales linearly with brightness. If I increase brightness by a factor of 100, I automatically get 100 times more coherent photons.

Okay. That was the primer about coherence, and we'll talk about it a bit more, but I wanted to highlight two specific techniques that I have actually used here at LCLS, and that I use in my lab, in my group. One is X-ray photon correlation spectroscopy, which allows you to look at nanoscale fluctuations; I'll try to explain what that means in a bit more detail. The other is coherent X-ray diffractive imaging. It goes by several different names; "lensless imaging," you may have heard of that. The idea here is to do lensless real-space microscopy based on phase retrieval, based on essentially reconstructing speckle patterns like the ones I showed you earlier.

Okay, so first, let's try to get you guys waking up a little bit. This is laser speckle; I actually took this picture in my office. If you take a laser and just shine it through something that is a little bit translucent, like this cup, you get speckle. Those of you sitting very close, in the front row, can actually see the speckle pattern, right? This is a manifestation of the fact that you have defects here, some roughness of the plastic, and as the beam goes through, it scatters from all these imperfections with random amplitudes and phases, and it adds up to the speckle pattern you see here. So the question I want to ask the students, not the faculty, the students, is: when do you think visible-light speckle was first observed? Just the decade, roughly. >> [CROSSTALK] >> What is it? Speak up, I can't hear. [INAUDIBLE] Lasers. So when was the laser invented, then? >> Laser, like the 50s, right? >> 50s or 60s. Yes, 1960, definitely. A lot of people think 1960, because they think you need a laser to do this. It turns out that's wrong. The answer is actually almost 100 years before that. The first speckle was detected by Karl Exner, and he was using candlelight; I'm going to come back to explaining why that works. It was such ancient times that he didn't have photographic technology, so he had to sketch it in a notebook. So make good sketches, because someday, you know, 150 years from now, somebody is going to show your notebook in a slide like this. So he sketched this speckle.
Probably other people had seen this too, but they just didn't make a good record of it. Then von Laue, the same von Laue who got the Nobel Prize for X-ray work, also used an arc-discharge lamp to produce these beautiful speckles from some powders. [INAUDIBLE] So this is kind of weird, right? We know that you need coherent light to produce a speckle, to produce interference patterns, so how is it possible that they were using candlelight, when candlelight is a very incoherent light source? I'll explain this in a second, but does anybody know the answer? Yes, how? >> Small angle. >> Small angle, yes, something like that. Basically, they could filter the light; I'll show you the slide that explains this.

In fact, you could argue that speckles were observed even before that. When you were like two years old and you looked up at the stars and saw the twinkling of the starlight, that was basically a speckle experiment. What happens is, the light source is so far away, its angular dimensions are so small, that the coherence length is large enough to interfere across the atmospheric density fluctuations in Earth's atmosphere. Your eye becomes a point detector that sees bright and dark fringes passing through it; the speckle pattern changes as the atmosphere changes. That's why the stars twinkle and the planets do not, and that's how you can tell them apart: the planets are much bigger in angular dimension, so the coherence length becomes much shorter. This speckle is a big nuisance for astronomers, because when they want to detect, you know, a planet around a star, to [INAUDIBLE] find what other Earths are out there, they always have this speckle around it, unless they go to space or use some sort of [INAUDIBLE] correction techniques to correct all these distortions caused by the atmosphere. So it's kind of ironic that for astronomers this is a big nuisance, while for us it's a great tool, right? We sort of turn things around. In fact, Newton talked about this twinkling of the stars, and he figured out that it basically has to do with the speckle phenomenon.

Okay, so let's go back: how did these guys, Exner and von Laue, manage to create a coherent light source out of incoherent light bulbs and candles? Well, the idea is that you make it spatially coherent. Remember that when we say incoherent, it just means that the coherence length is very tiny: the wavefront is not a plane wavefront, it has many, many distortions. But if you put a little filter, a pinhole, in a screen, it will select a region of your wavefront that can be approximated as almost a plane wave. You can also think of it as creating a virtual source: far enough away from the virtual source, the spherical wave looks almost like the plane wave you'd expect from a laser. Right.
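The same lambda-over-angular-size estimate makes both points concrete: why starlight is coherent across your pupil while planet light is not, and why a pinhole (a small angular source) works. The angular sizes below are order-of-magnitude illustrations I've chosen, not figures from the talk:

    # Coherence length ~ wavelength / angular size of the source.
    wavelength = 550e-9          # visible light, in meters
    sources = {
        "star":   5e-8,          # ~0.01 arcsec in radians: typical stellar disk
        "planet": 1e-4,          # ~20 arcsec in radians: bright planet
    }
    for name, theta in sources.items():
        print(name, wavelength / theta)
    # star: ~11 m, far larger than your pupil, so the fringes sweep across it and it twinkles;
    # planet: ~5 mm, comparable to the pupil, so the fringes largely average out.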
And one trick, of course: not only do you have to make it spatially coherent, which is what I've mostly talked about, you also have to make it temporally coherent, meaning you want to select a single wavelength. In fact, one of the reasons Exner's speckles were all stretched out is that he had many different colors: he filtered the light spatially, but he did not filter the candlelight temporally, so all the different colors created speckles at different angles, and that's why his pictures were all stretched-out speckles; he could basically see the colors everywhere. If he had used some sort of prism or other temporal filtering technique to select a single wavelength, he would have seen much nicer speckles, similar to the ones you see with a laser. So basically, if you combine the two, if you spatially and temporally filter your light, you pay a huge price by throwing away a lot of photons, but you can create light that, far enough away, looks like a coherent light source. And that's basically how the first X-ray speckle was observed, probably before many of you guys were born, in 1991, by Mark Sutton at [UNKNOWN], which was a second-generation synchrotron, not even a third one. In fact, many people did not believe it was possible. They thought he was looking at some grains in the material, in the metal, or something like that; they thought it was an artifact and not speckle.

Okay, let me switch gears a little and tell you about something else that has been on my mind a lot. [INAUDIBLE] of Physics actually polled a number of readers about the most beautiful physics experiment of all time, and these are the submissions. Number one was Young's double-slit experiment applied to the interference of single electrons. You can read some of the other ones; they're all classic experiments that are really beautiful and fantastic in many ways. But this is the experiment, you may remember reading about it in Feynman's Lectures on Physics, where you send one electron to diffract through two slits, you look at the interference pattern behind them, and you see that the pattern looks like a wave rather than a particle. And the real twist that Feynman likes to give is this: if you send many electrons at the same time, you could argue that they somehow bounce off each other or interfere with each other. But you can send one electron at a time, and the pattern is still consistent with the electron going through both slits at the same time, right? And if you try to figure out which slit it came through by making an additional observation, you actually ruin the interference: by observing, you change the experiment. The idea of this experiment, in Feynman's telling, was a thought experiment: you send any particle, it could be a light particle, but in most cases people like to use electrons, which you can basically think of almost as tennis balls, and it manages to go through both slits at the same time and create a pattern that has these fringes instead of just a shadow mask of the object. That confirms the quantum mechanical nature of the electron, or of light, for that matter.
And it turns out, if you look at the history, that this experiment with single electrons, sending one electron at a time and collecting the data, was actually done only much later, long after Feynman talked about it. People had done other variations that indicated this would be the case, but nobody had actually done the experiment he describes in the lectures. It was done only in 1989, way after Feynman was talking about it. These are the fringes people observed in the single-electron diffraction experiment: sending one electron at a time through the two slits, they collected them into this fringe pattern.

So why am I talking about this double-slit experiment? Because the speckle experiment is basically exactly the same experiment with light. For the most part, we send at most one photon per pulse; that photon interferes with the complex sample we put in our beam, so that one photon interferes with itself throughout the entire sample and forms this speckle pattern. Essentially we build it up one photon at a time, one photon per pulse, or zero photons per pulse in most cases. So you can think of this as basically that same quantum mechanics experiment repeated millions of times a second while you're collecting the speckle pattern. That's at normal storage rings; at an XFEL that's not going to be the case, because you're going to have many, many more coherent photons per pulse getting detected.

Okay, so let's talk about XPCS. What is XPCS? X-ray photon correlation spectroscopy. I'll introduce this technique rather quickly: if you know about light scattering, dynamic light scattering, it's basically the same technique taken to X-rays and given a slightly different name. The idea is that if you have a configuration of particles, they all scatter and contribute to the far-field speckle pattern, the interference pattern created by these many different particles. If the particles start moving around, the interference pattern will change, right? Just as, if I start moving my glass through here, I'll change the diffraction pattern, I'll change the speckle distribution, because the relative phases and intensities added up from different scatterers will change. So think of this not as a double-slit experiment but as a 100-slit experiment, a 100-pinhole experiment: if I change the positions of all the pinholes, I change the speckle pattern, at the same rate at which I'm changing the sample. Therefore, if I simply stare at one pixel here and look at the fluctuations of intensity, I can correlate them in time. The correlation function here means that at very short times the intensity does not change: it is correlated with itself if the delay between the two measurements is very short. But if I make the delay long, the intensity at one time is no longer correlated with the intensity some time later. This decay of the correlation function tells you that the system changes, and decorrelates in real space,
at that time scale. And since you can measure this at different wave vectors, you can learn how the system is correlated across many different spatial frequencies. For example, in the Brownian diffusion case you can actually verify that the displacement scales as the square root of time, because you can look at different Q and find that the decay rate goes as Q squared, which is what you measure here. This is very powerful, because you can measure all the different Q's at the same time simply by correlating the speckles.

Here is one quick example. You can calculate something known as the intensity-intensity autocorrelation function. If you know what an autocorrelation function is, great; if not, that's fine, you can just think of it physically as: how long does it take before the system changes to a new configuration? You can define it mathematically like this, and you will see that the autocorrelation function decays, and the time at which it decays, and its exponent in time, change as a function of wave vector in most systems. That means the system relaxes on different length scales at different times. There's a bit of inside-baseball here about speckle contrast and exponents that you don't have to worry about too much in this slide, but if you're interested to learn more, you can ask me later.

>> So... >> Yeah. >> Like, if... >> Yeah. >> I can read the Q's from here, but which would be quickest? >> Well, usually, if you think about looking at the system in a microscope, in real space, right? At short distances it decorrelates the fastest, and if you zoom out and look at a low-resolution image, it takes longer. So usually the small distances, which are the large Q, have the fastest dynamics. I don't think I have it in this talk, but I have some movies where you can actually see it by eye; you can kind of judge it.

So the typical experiment, the way we do this at storage rings, is very simple. We just take snapshots at different times, then we correlate the intensity in the same pixel with itself some time later, and we calculate our correlation function. You can see by eye that they become decorrelated. But these are very slow time scales; in fact, in order to see this by eye, we're looking at a very slow response, in minutes and sometimes even hours. Here is an example you can see by eye a little better. As we change the temperature, this is a magnetic system with a spiral-like structure, and we look at the speckle patterns as we approach a phase transition: you can see that the speckles here almost don't fluctuate, and then the speckles here begin to fluctuate quite a bit. This is a more or less real-time movie of a helical [UNKNOWN] system. If you simply stare at some region here, you'll see the speckles change very rapidly, whereas at small Q, maybe, they don't change as quickly. Maybe that's not the best example. This technique has been applied to a number of different systems, mostly very slow ones, basically glassy dynamics, as I'll try to explain.
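For concreteness, here is a toy version of the analysis just described: compute g2(tau) = <I(t) I(t+tau)> / <I>^2 for each pixel and average. The synthetic data and all parameters are invented for illustration; real XPCS pipelines use multi-tau binning and average over rings of equivalent Q:

    import numpy as np

    def g2(frames, max_lag):
        """Intensity-intensity autocorrelation g2(tau), averaged over pixels.
        frames: array of shape (n_times, n_pixels)."""
        mean_sq = frames.mean(axis=0) ** 2
        return np.array([
            ((frames[:-lag] * frames[lag:]).mean(axis=0) / mean_sq).mean()
            for lag in range(1, max_lag + 1)
        ])

    # Toy intensity series: AR(1) noise with a correlation time of ~50 frames.
    rng = np.random.default_rng(0)
    x = np.empty((2000, 64))
    x[0] = rng.standard_normal(64)
    for i in range(1, 2000):
        x[i] = 0.98 * x[i - 1] + np.sqrt(1 - 0.98**2) * rng.standard_normal(64)
    frames = 1 + 0.3 * x     # fluctuating "speckle" intensities around a mean of 1
    print(g2(frames, 5))     # starts above 1 and decays toward 1 as the lag grows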
People have looked at a number of hard condensed matter systems: spin- and charge-ordered states. Soft matter is where you have a lot of glassy systems: if you mix colloidal particles, you can basically make the dynamics very slow, and this is a perfect technique to study that at very small length scales. There have been applications in biology, and of course materials science, and so on. But look at where this technique has been most successful in terms of time, or frequency, or energy if you will, versus wave vector. This is essentially an S(Q, omega) map, right? In physics we like to describe processes as S(Q, omega): the length scale goes into Q, the time scale goes into frequency. So far, these are nanoscale but very slow processes. Anything larger you can measure with various light scattering techniques, dynamic light scattering, or, if you want to go faster, inelastic techniques like Raman and Brillouin scattering; or you can use inelastic neutron or X-ray scattering, or spin echo, to measure faster processes when you want spatial resolution that you cannot get with light. Yes?

>> [INAUDIBLE] like, surface measurements? >> In this case? >> Yeah, like how deep? >> Basically, you're looking at whatever the X-ray penetration depth is; in most cases you're looking at the entire penetrated volume, all right? In this case, for example, are you asking about this particular system? >> Yes, yes. >> In this particular system it's a thin film, so these people looked at the whole film. I'm forgetting exactly what the thickness was, but basically tens of nanometers, or something like that, perhaps. And that's a good point to make, because with visible light you would not be able to penetrate the whole system. So in this experiment we're looking at about a micron, if I remember correctly, on that order of penetration depth. We're looking at the domains fluctuating within the first micron at the surface, which is bulk to most people, though some surface people would say it's still a surface. But it basically depends on your X-ray energy and the penetration in that particular system. In some other cases, in fact, in the previous system, we're looking at surface fluctuations, we're interested in surface dynamics; in that case all the signal comes from distortions of the surface waves, in liquid systems for example. You can optimize the geometry, the X-ray energy, and other things to tune that and measure exactly what you're after. [SOUND] In fact, I would argue that in some systems you may want to go away from the surface. In this particular system it's actually kind of surprising that the spins fluctuate so rapidly, given that it's a thin-film system: surfaces themselves are big defects, and they tend to pin fluctuations, so you really need to go to bulk-type systems in order to study that properly.
Okay, so the point I wanted to make here is that if you look at most of the systems that have been studied, the fastest ones, sorry, the large-Q ones, are usually the slowest ones, and these are the ones around [UNKNOWN] peaks; these are typically liquid crystals. The fastest systems that people have studied with XPCS are a whole bunch of soft matter systems, polymers and colloidal systems. But they're all very slow, right? That's the main point I wanted to make. We're talking about ultrafast, and these are ultra-slow processes: seconds and slower, or milliseconds and slower. So the question we would like to pose is: can we bridge the gap and use XPCS in the region where XFELs operate? Can we measure things that are nanoseconds, picoseconds, or maybe even femtoseconds with XPCS, and then overlap in some ways with the inelastic scattering techniques?

Maybe I'll skip this slide briefly in the interest of time, but one interesting thing about XPCS as you go faster is that the signal-to-noise for XPCS goes as the flux times the square root of the measurement time. I'll come back to explain this in more detail, but it means that if you increase the brightness of your source, you gain quadratically in acquisition time: not linearly, not as the square root, but quadratically. (Check: if SNR is proportional to F times the square root of T, then replacing F by N times F and T by T over N squared leaves the product unchanged.) So if somebody gives you N times more photons, in terms of the brightness of your source, you can measure N-squared times faster. Bridging those orders of magnitude in temporal resolution may therefore not be as crazy as some people think.

So there is an XPCS setup [INAUDIBLE] LCLS, an idea people have proposed, and I should say that so far it has not been hugely successful; I say this because I think maybe it's up to some of you guys to figure out how to do it properly. The idea is the following. Ideally you want to send two pulses, and you correlate what happens with the first pulse and what happens with the second pulse, XPCS-style. So you start with a single pulse and a split-and-delay line, which creates essentially two pulses that both come from the same original pulse, and by changing the delay between the two pulses, you can probe dynamics at the time scales corresponding to that delay. That was the original idea, and if you come from an optics background, you will say, well, this is trivial: any undergrad in the lab can put together a bunch of mirrors and build that split-and-delay line in an hour, right? But if you come from the X-ray community, you'll know that aligning even one crystal to get a [UNKNOWN] is quite challenging. A lot of my own work used to be on liquid surfaces, where you have to tilt the beam down onto the horizontal surface using something known as a steering crystal, and aligning that takes forever; it sometimes takes a day to align it properly and make sure it steers over the whole range of angles. In this geometry, the way it's shown, you have six crystals, well, actually eight crystals if you count the ones that have to split the beam in half and recombine it again.
And it becomes very challenging: when the energy fluctuates, or the intensity fluctuates, the whole system goes out of alignment. So for now this is still a work in progress. Yes? >> Some sort of a [INAUDIBLE]. >> I'm sorry? >> Is the [CROSSTALK]? >> You can think of it as a very thin [UNKNOWN] crystal that diffracts some of the beam but transmits the rest. >> [UNKNOWN] >> I cannot hear you. >> [UNKNOWN] >> No, they are basically the same color, in terms of wavelength or energy. The two colors here are just to illustrate that these are two different pulses, one ahead of the other; the color is not meant to indicate different wavelengths. In fact, there are setups where you can offset the two pulses slightly in energy, which is useful for something else, but in this case you want them at exactly the same wavelength.

>> [INAUDIBLE] >> Yeah. >> You assume that the [INAUDIBLE] coherent? >> No, you don't need a photon in this pulse to have any phase relationship with a photon in that pulse, and in general there is none. So again, a reminder: if we send one photon, just as in the double-slit experiment, it interferes through the system and forms its own speckle pattern. The next one that comes in has no relationship to the previous one; it just forms its own speckle pattern, but if the system has not changed, it's going to be the same speckle pattern. So in this case you don't need a phase relationship between the two pulses. You just split them off, and each of them has to be fully spatially coherent with itself. The angles have to be collinear, and they have to hit exactly the same part of the sample. Your sample is going to be here, and you're going to form a speckle pattern: first one from the first pulse, then a second one from the second pulse. But you don't try to add them coherently; you just add intensities, basically.

Then there's the question: how do we correlate these two pulses? If you want to offset them by picoseconds, or maybe up to nanoseconds, there is no detector in the world that can read out the whole area detector and then measure the next pulse again. So the whole idea of measuring two different sets of speckle patterns and then correlating them seems hopeless. And even if you could fix that, which is sort of an engineering issue, right, what about the detector part? Here the answer comes from remembering what the visibility of the fringes is. If I have a fully coherent source, or very close to fully coherent, I'm going to have very high visibility, maximum to minimum. But if I add two patterns that are not coherent with each other, I reduce that visibility, and you can see it washes out completely: you don't see any fringes. So the visibility of the fringes is related to how coherent, or how similar, the two speckle patterns are. If I go back here: there's going to be a speckle pattern created by the first pulse, and a perhaps different speckle pattern created by the second pulse.
If the system has not changed, the speckle patterns are going to be exactly the same; you just add them on top of each other, and the visibility does not change. But if the system has changed during that time, then the second speckle pattern is going to be different from the first one, and the maxima and minima start to wash out, just as they wash out here when you add two mutually incoherent patterns together. So the idea is basically that you can simply look at the visibility: if you go to longer delays and the system evolves to some different state, you reduce the visibility of the fringes, as you can see here; the maxima and minima from the two different pulses start overlapping. Yes? >> [INAUDIBLE] or another one. >> Yeah. >> You are splitting your beam here and reflecting it over mirrors that are [INAUDIBLE], are never [INAUDIBLE]. You have, as you say, two very different pulses. Aren't they too different? >> So, they're different in what sense, right? If this first pulse is coherent by itself, it has spatial coherence, meaning it has a flat wavefront propagating away from it, it will create a speckle pattern. The second pulse could even have a slightly different intensity; we don't care about that too much, we can normalize for it later. But as long as it is also spatially coherent with itself, and the two pulses follow the same path, that is, they see the same sample from the same angle, that's the only thing that matters for this experiment. Because in the end we're going to add up intensities from the speckle pattern created by this one with intensities from the speckle pattern created by the second one, and we're going to see whether they smear out the fringes, smear out the speckles; that's what's important. Did that answer your question? Okay.

>> So the main idea, and maybe I'll just skip over some of this technical stuff, is that you can look at the visibility, and the visibility is going to change if the system changes. Yes? >> [INAUDIBLE] >> Here. >> Down one. >> Exactly. >> So that only happens within the pulse duration, right? The first pulse is coming in and has, like, [CROSSTALK]. >> No, no, no. So basically, think about... >> The second pulse is, like, 500 femtoseconds delayed? >> Yeah. >> It doesn't interfere, right? >> It doesn't interfere. You add intensities. So think about what happens, maybe graphically, right? Imagine you have fringes from one pulse: this is your speckle pattern, a line cross-section through the first speckle pattern. Then you add a second speckle pattern, you add intensities, but in that one something has changed; say it's 100 picoseconds later and the system has evolved to a different state, so you have some other random speckle pattern that looks like this. You add the two together, and you see that the maxima and minima no longer match up, right? You washed out your visibility because the two are different. So the idea, which people have used quite successfully in the light scattering regime, is to look at the visibility.
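A toy numerical version of that picture: build speckle from 100 random scatterers, add the intensity patterns from two "pulses," and compare the contrast when the sample is unchanged versus rearranged. The 1-D random-phasor model and all its parameters are invented purely for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    def speckle(phases):
        """Far-field intensity of a 1-D array of 100 scatterers with given phases."""
        q = np.linspace(-1, 1, 512)[:, None]
        n = np.arange(100)[None, :]
        field = np.exp(1j * (50 * q * n + phases)).sum(axis=1)
        return np.abs(field) ** 2

    def contrast(intensity):
        return intensity.std() / intensity.mean()

    phases = rng.uniform(0, 2 * np.pi, 100)
    unchanged = speckle(phases) + speckle(phases)                        # system frozen
    evolved = speckle(phases) + speckle(rng.uniform(0, 2 * np.pi, 100))  # system rearranged

    print(contrast(unchanged), contrast(evolved))  # evolved ~ 1/sqrt(2) of unchanged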
You can detect the same thing that you measure with a correlated impulse. So in other, in other words, the sort of correlating intensities, and calculating, calculating auto correlation functions. You can also just simply look at the visibility of the fringes. But then you can ask yourself, okay, but most of the time you don't even have enough. What if your, your signal is so weak you don't even see the fringes. It all looks like noise. You can do something even better. You can just simply look at the, the double counts and the triple counts. In other words if you look at your detector and you collect, there's going to be some photons that will scatter only once. They will, they will only, not scatter, they will only some pixels that will only record some photon hit. What you have to look at is how many pixels will record, two photons, come into the same pixel or the same speckle. And it turns out that the ratio, the, the fraction of this double, hit events, which is very unlikely, it goes the square of this. Will be different, depending on whether, the system in a same state and is the same speckle, or if it is two different speckle patterns. And again, you can think about it in those, in that simple picture is that if everything is fully coherent you have fringes. Which means that some regions, some pixels will have more probability of catching the photon than in the dark region than between. If the system is fully incoherent you add in completely uncorrelated speckles. You will no longer have that, that, that, that's no longer the case. And you're going to have photons more or less evenly, distributed in terms of their probability function across the detector. So by simply looking at where the, the, this two photon events happened. This is essentially the two different regimes, fully coherent and fully incoherent. By doing this measurement, you can get the same information as can be obtain from auto correlation function. >> You already have from the process a lot of background. Essentially you almost never have any doubles. >> Well so the number of doubles goes as square root of the number of singles typically, right. So you have only maybe, I don't know, lets say 1% of capturing a single photon then you have 10 to the minus 4 probability of catching the double. The nice thing is that you only need to measure, so then the question is how many doubles do you have to collect? And in order to tell the difference whether it's fully coherent or incoherent you need something like, you know, zero approximation. You need something like, the ratio between the two is 2, so if you measured 20 of them, 20 doubles. You should tell from your statistics of singles whether it's or it's coherent or incoherent. If you want of course better arrow bars than you have to count more. Yeah? >> [INAUDIBLE] your normalization of [CROSSTALK] >> It depends on normalization, so lets. >> The statistics of >> Exactly. >> Normalization. >> The statistics, well you know statistics of singles. You have to measure singles too. You have to know how many total singles you have. >> But you need to know from the two. >> Yeah, but you know, you know basically two different, so statistically you know the distribution functions for fully coherent and fully incoherent. It's polynomial and something else, I'm now forgetting, I think I have a slide on that somewhere. But you know the two distribution functions and the ratio between them is roughly two. [INAUDIBLE] in your definition of g2 here. >> Yeah, I think, I think. 
>> [CROSSTALK] assume that the two pulses are, exactly equivalent. >> It doesn't matter. Actually, the number of, the number of. The normalization basically becomes, so let's just, I think this slide is actually going to answer your question. So, if you think about a measurement where you have time going this way and somehow your detector is. Imagine that your detector is super fast and it, as the photons come in and you're always trying to figure out where they are. And then, instead of correlating within the same pulse, some, most of the time you're going to get single hit, right? Somewhere in different times. But then sometimes you're going to had, get double, which is what you need for XPCS. For XPCS, you need to correlate something that happened at one time, with something else that happened at some other time. If you think about this in low-count rate, the only time that you don't get 0 here, when you correlate this along the time dimension, right? For a single, it's going to be 0 all the time, because one of the intensities is going to be zero. The only time that you don't get 0 is when you have a double. And also triple, and quadruple events, but lets forget about those, because those are even less likely. So effectively, in g2, you can think about this as a number of doubles. How many doubles you capture. And then this becomes number of singles squared, or whatever. There is some renormalization that you have to worry about, but basically this is just total intensity. What's the probability of getting a single, and this is the probability of getting a double. So if you simply count your double divided by count of, total count of singles which is total flux it should give you a measure of g2. Two 0 order of approximation because there is also triples and so on. And in fact you can simulate that and we've don this in my group and it basically scales if you calculate using. The traditional XPCS auto correlation function versus just looking at doubles versus singles histogram it's basically scaled exactly the same way. The only difference here is that some knowledge the fact that you well, first of all we don't count the triples in this case. So there will be a small correction for that and also there is a slightly different parameters. But it basically measures exactly the same. So, the base, the bottom line is that in this measurement, you can just simply look at the number of doubles. If you look at go back to, if you look back at this ratio of doubles versus singles here. You should be, look at the histogram, you should be able to figure out what the contrast in your, in your experiment is. Without doing any addition auto correlation. So it is possible assuming that, yeah. >> Are you obliged in this case, for example, to take only single shots? Or can you average, for example, over those that have the same [INAUDIBLE]? >> You can average, so each time you methods, basically they're, what's shown here as beta is the degree of coherence in your overall measurement. Right? So if you add too many, so if you started with a beam that is totally spatially coherent, so that equals one. And you just measure single shot and nothing happens in that one shot, it's too fast. Then you're going to measure a speckle pattern that will have contrast of one. Means that the maximum to minimum is going to be 0 and maximum is going to be some value, maximum value. But if something has happened during those say 110 per second of the duration of the pulse. 
Which means that no, it's all ready, the speckle is already being decohered by the changes in the sound pulse. Your beta will drop towards lower values. And if it's decohered, it's completely drops to 0. You can also think about adding just two different snapshots, right? And if the system has not changed, you should get the same value, you know. Ideally, one, you should start it with one, but if your system has evolved, then you're going to cha, change the number. It's going to drop down. You can only decrease usually the correlation value. So think about this as a visibility factor of your fringes or speckle pattern. Again, so the main point here is that the, the drop off of this visibility. So here, for example, you can see that for the triple case, that you can see it even more clearly, that the beta here is 0.276. You measured by fit into the line. This would be what, what the line, number of triples would be if beta was equal to 1. This would be completely uncorrelated 0. And so in this case, this is actually experiment data [INAUDIBLE] I forgot to put the reference here. But in this case, they basically froze the motion of, or partially froze the motion of the atomic diffusion motion in the liquid metal. And they could measure that, that the contrast here using that technique. They could just as well measure it by looking at the doubles instead of looking at the triples. In this case, they had enough triples even, to actually measure that. Okay. So I'm actually going a lot slower than I, actually, this is one thing I that I wanted to show, just kind of as a fun thing. So this is the same group that measured this, so they're looking at metallic glass [UNKNOWN] nickel phosphorus. This is early measurement. You have to always be aware of the damage that your beam can do, especially with XFEL. They're basically drilling holes in the metallic glass, which is quite impressive, at least if you ask me. So this is the crater that was left after the beam was exposed to this meta, metallic glass yeah? >> I do have a question here, so the only create these craters if you focus? >> Well in this case, yeah. This is done. >> So you don't necessarily need with this technique the focus the beam or what is the difference focus? >> No, you only need to focus the beam the only issue with, well there are several issues that would not focus beam. So the one thing is that you, ideally you want to match your resolution via detector to the speckle size. And if you make, if you make the be millimeter sized. Then your speckles, the pixels become very, you know you have to put the detector kilometer away, or you want to make your pixels very tiny. And making your pixels small is also a problem, because of the charge spread out. So there's a lot of technical issues. But in principle, I would argue that you may even want to make your beam as big as possible, as long as your detector can, allows you to. Because that means that you are looking at statistical average over a larger region. All right, so you're looking at all correlation function of the sample, it includes many different regions. And so you perform a sample average over many, many different particles. You don't want to look, in other words, if I think about focusing too much, at some point you're looking at, say, a single particle. And then the fact that you're using coherence doesn't matter. You can just look at the intensity fluctuations from a single with single scatter, right? 
Single [UNKNOWN] for that matter so this is the [UNKNOWN] all of times. People were cited for example, diffusion of defects. You ideally want to have something like 100 or 1,000 defects in your beam. If you look at a single one, you no longer have to use the coherence. You can just look at, you can just track it by intensity fluctuation from incoherent view. If it could focus in that well. Okay, so let me switch some I guess I didn't hit, leave enough times. So I'll try to maybe skip some slides, and you guys will have to forgive me But basically I wanted to talk about some other part of the coherence that I think is maybe more interesting to some of you which is the lens-less imaging. And so if you are a biologist or chemist you may ask yourself well, I want to know, not only how fast something is changing, but also I want to maybe image a molecule or some fraction of the molecule. And I, I want to, I'm interested in some process, chemical process, or maybe biological process that happens here. So I want to combine microscopy with ultra fast scattering. Can I do that? Or if you are a physicist again, if you are driving some of the systems that I showed you earlier, they are all phase separated, and you may not want again, not only know what on average, your response looks like, but also what is locally happening, and ultra fast time scales. So this is very challenging right, but the way I've told you, I want to have spacial resolution down to 9 metres, or maybe even atomic scale. And I also want to know what happens in in time domain as well [INAUDIBLE], it sounds very challenging. So one approach that I wanted to talk about, is something called lens-less imaging, where the idea is instead of using a lens to de-magnify an object and look at its, you know, what it looks like. Which is done in most spectroscopy techniques. What if you just let the light diffract, eliminate the lens entirely and then this is, the de-fraction pattern you can think about is four inches from, your structure in real space. So in principle if I knew all the phases I could just invert four inches from, and I get, I get my image in real space, right? So that's sort of the simplest argument for that. And it turns out there are algorithms that allow you to do that, of course you have phases that are lost during the measurements. So you have to do some tricks in order to recover the phases that are lost. So the question is, this is sort of an example that I like to use for pedagogical reasons but it could be anything, it could be a molecule, imagine whatever your favorite system is that you want to study, it creates a Foulier transform which looks like this and in this case, just so you know, this is a magnetic. Domains that I'll show you in a second and they creates a speckle pattern like that. And, and the question is, can we invert this diffraction pattern? And the answer normally would be, no, because we've lost all the phases here. What we measured is, how many photons arrive at different pixels? And that's the speckle that we see here. We don't know the relative phases at which this, this photons arrive. With this back to each other. By the way, let me just say one way to figure out the [INAUDIBLE] phases is to mix in another inference beam. And that's called inline holography. I, I'll show maybe one slide if I have time about that but it has some other complications and disadvantages for for, for different reasons. But can we actually do that without mixing the beams? 
It turns out you can if you over sample it, meaning that if you measure the spatial distribution of intensities, that it's finer, that is finer than the dimension of the speckle. So the size of the speckle here is inversely proportional to the total size of the, either the beam or the object that you eliminated, and that's called Nyquist frequency. The highest frequency that you see here in Fourier transform corresponds to the largest dimension of which you allow its interference to happen, right. So that's just speckle size corresponds in this case to about ten microns, the dimensions of our sample, basically, of the beam. So it turns out that if you measure the intensity variations finer than that frequency, the Nyquist frequency, by at least a factor of 2. In reality, by a factor of two and a half or so, you can actually invert the diffraction [UNKNOWN] for the phases, in a unique fashion. And it still sounds a little bit like a magic. So this is my magnetic film that I'm trying to image. This is the diffraction pattern. There is no, basically, lens in the way. And this is just a pinhole that selects a single coherent volume that is about ten microns in size, 'cause remember this is the pinhole that. [UNKNOWN] would have used [UNKNOWN]. So there, the system that I'm studying here, and again, if you're not into magnetism, you can disregard this, but in ten seconds, I have the maze-like, randomly meandering sort of patterns. So [UNKNOWN] when it spins is either up or down. Out of the, out of the plane of the film. And so what i'm trying to image in this domains. Some of them have one color let's say going down. The other one i'm going to call spin up. How do I see magnetism with X rays again? Again this is sort of a ten second henway of an explanation. You use resonant X-ray, Magnetic Dichroisms. So the idea is that if I tune to one of the resonate edges, in this case we have gadolinium and iron in the system. You can use either the Gadolinium edge, or iron edge, we happened to use Gadolinium. If you use, if you tune to one of these edges. In this case we used I believe we actually use, four, maybe five. So let's say we become five here. You're going to have two different channels, you're going to have to different cross sections depending on whether you have circular polarized light that is either oriented in the same direction as the spin or the opposite direction to the spin. You're going to have two different absorption cross sections, this will only happen at the edge, you walk away from the edge the two cross sections become the same again. And you can notice at the other edge of the M4 in this case, the contrast is flipped. And you can actually use that and that has to do with the fact that the probability of promoting the electron to the split thermal surface, is actually different whether you, your, depends on basically relative fluoridation of, of the light and the spin that your thermal adhere. That only happens with the resonance. So the main idea here is that when you tune to the resonance, all of a sudden you see through the domains, that will scatter differently. If you are away from the resonance you don't see the domains at all, it's just thin, thin magnetic field. So when you do this experiment when you tune to the resident edge, we get additional speckle pattern, if we're away from the resident edge we just see the edges of our thin hole. All attenuated uniformly by the fil, we don't see the magnetic domains. 
And so the idea is, can we actually phase this diffraction pattern that is, that is shown here? Can we solve for what configuration domains looks like here? And the Original Phase in, Phase Retrieval Paper by David Sayre, which he basically formulates that he can do this. And it comes from the Shannon's theory. And Shannon's the famous scientist who basically deal, dealt with a lot of information process information theory, who figured out that it's possible to do that. So the reason I like to show this to the students is that this is the entire paper it's not an abstract here like an APS March meeting. This is the entire paper, it has three references. And it says extension to 3D is obvious, and you can ask people who try to do this in 3D whether it's really obvious or not, but basically he just tells you that if you measure it at fine enough spacial frequency you can do that, he doesn't tell you how to do this, he just says, mathematically there should be a solution somewhere. So how do we deal with this mathematically? Well, what we do is we basically alternate, we have a set of constraints. We have reciprocal space, Fourier transform space, Fourier space basically, where we know the amplitudes but not the phases, so I'm going to show amplitudes as color, and phas, phases as color and amplitudes as brightness, in this, sketch. And we have real space constrained where we know that our size of the beam, or size of the object fits in some box. And because it fits in some box that gives rise to this spackle, the spackle site here is the size of the box. That's what we have to sample, by at least a factor of two. And what we do is we start with random phases, we throw some random colors basically on our m, amplitudes that we measures, the intensities that we measured We know how to do, project back to the rule space by doing inverse Fourier transform. Because we over sample it by, let's say a factor of two and a half, this object is going to be bigger by a factor of two and half. It will also have a lot of other problems like the, the amplitudes could be, the densities could be imaginary, it could be negative. So I applied the real space constraint where we remove everything outside of the box that we know it fits in. And we made updates, [UNKNOWN] our density with real values instead of imaginary or positive values. Yes? >> [UNKNOWN] >> It doesn't matter. It's the same idea. [UNKNOWN] For those of you that dont know what it is, nevermind what it is, but yeah I can. >> [UNKNOWN] If you have like overlap like with slow motion, the overlap is not going to be exactly the same between, two exposures right? >> So this is, this is general, so this is general for not only take over [INAUDIBLE] single molecule imaging for anything. So if you're looking at single particles. This is a general algorithm, so I showed you tech, actually I don't think I showed you typography. In this particular case that is, it is an experiment done using typography. I wasn't prepared to talk about it too much but basically typography is the idea that you can build additional constraint, by overlapping two different regions in real space. If you have extended object, you may want to do that. But you don't have to do that. In principle, a lot of systems that we study, other people study, don't even use that for simplify. >> If you have one motion between the. Your, yes, your overlap, so you have one exposure with the walls in one position, and the second exposure the walls have slightly changed. 
>> If you're looking at dynamics you don't want to use typography, that's my argument. You want to use a single, a single shot a page. We can talk about this later,but I wasn't, I wasn't planning even to talk about techography too much even though this. Technically use [UNKNOWN]. But it's ideal, so, let me just go back to, this algorithm is, more or less generic to almost anything that people, any, [UNKNOWN] algorithm that people use, where the idea is you alternate, so once you updated your real space image, and it's going to be wrong because you chose some wrong, random set of phases, when you fully transform. Back into the reciprocal space now, all your phases are wrong and your amplitudes are wrong also. And so then you have date your amplitudes with the one you measured, and then keep going into the slope and it turns out that over, over time you basically will converge on some solution that is your solution that you began. And in basically, in, in some mathematical way you have two surfaces. >> you have no space constraints the space of the object is finite and also perhaps the density is real instead of imaginary they are positive instead of negative then you gave recipicle restraints which is the fact that the measure down put you dont know the phases there is a solution somewhere here this is what sayers paper told you that it exists. Yes. >> Is there only one solution? >> Is there only one solution. So, yes. There's only one solution if you oversample by more than factor of two in 3D. In 1D, for example, that's not the case. In 2D and 3D, there's only one solution. But the real answer to that is, that even though there's one solution, finding it is a problem. And also, the solution is sort of fuzzy. Because these things are fuzzy, because they have. Error bars, right, so that's always, a question that you have to wonder about. Just because mathematically it is one solution doesn't mean that, when you do multiple reconstructions you always find more or less the same solution but it looks slightly different, but it's around, it looks similar enough, it just means that you arrived that. You arrived at this point, instead of this point, because this is kind of a fuzzy surface instead of being a smooth surface. It has [UNKNOWN] basically. Anybody who is experimentalist knows, that nothing is perfect. [LAUGH] So then, the idea is you can start with initial guess, and you know how to project from one surface to the next. And the only problem is that sometimes instead of s, conversion in a solution by this pr, alternate projections, you may get stuck doing this forever, right. And that's why this, this topography helps to overcome that. But in many other ways you don't need that because there are other ways to basically overcome that as well, numerical ways. Okay so this happens to be techno-graphic example of how this exchange happens in assimilation where you throw away the base and then you can recover the face of the famous composer, and you can also do the same thing from like many of the main instances, I'll show you the remaining, I'll show you this movie. This is iteration, this is how many laps around this, this algorithm. How many loops have we, have we done. And again, we start with the random phases. We don't tweak anything deliberately, we just let it run by itself. We started with completely random numbers, and then over time you can see that it builds up some structure. You can begin to see is a, has a sort of real domain like structure. 
And you can also watch your watch something of, phase retrieval transfer function. Which is basically tells you the error function in a concede drop. You can actually figure out what's the, what's the, how well what you compute in there matched up with what you measured. And that also gives you a little bit of feedback of whether it converges or not. So let me just kind of skip, because I think my time is k, I wanted to show a couple examples of that. So I'm going to skip a whole bunch of my stuff, which is not relevant. You can ask me about that. Well the question is, can we actually do this? In this case, maybe to answer your question. So you asked me, what if your domain actually moves you in the measurement? And this is a quasi-static measurement meaning that we. We'll look at the remainders, we, we change the magnetic field so we change the magnetic field so we change the configuration of the domains, we do the imaging and the imaging takes time. It takes minutes to do it, and so if you stare here you can see that this domain actually jumped during the measurement, that's exactly what you asked me, and you can actually catch that. But you don't know when it jumped. You just know that it jumped during some time during the measurement that's why we see half of it here and half of it there. Its a little bit like you take a long exposure, of me talking here and while you are taking exposure I jumped here, you're going to see half of me there and maybe some blurry stuff. So that's basically what [UNKNOWN]. But the question is, can we actually do this at ultra fast version? Can we do this, can we repeat the same experiment in a faster domain? I'll show you a couple of examples, of people already doing that, even though I don't think it's, you know as many examples as, as you know, I thought there would be. But, just kind of to remind you, so this coherent X-ray difractive imaging could image not just electron density which what most people are perhaps interested in. But it could image other things. And I showed you an example of my [UNKNOWN] domains. A lot of us are working on lattice strain and defects. And you can prolective domains you can also look at ionic deficiency. So you can measure a number of different physical processes happening in your system. The way you coupled to the strain, this is kind of a simple explanation, if you think about you know, lattice lanes. If they're periodic, then all the planes would scatter to the Bragg peak at exactly the same phase, that's the Bragg law. But if I started shifting them around, if I displace them using these arrows. I moved some of these atomic planes this way. And then I started shifting them back the other way, creating the displacement field sort of going negative here. What I'm doing basically, is I'm changing the phase at which these planes contribute relative to this planes. They used to be in phase but now they're slightly out of phase. And what I can measure using the coherent diffractive imaging is that phase, right, because what you get in your space in your density, you get your density of your plate, as a, as a real part, and the phase corresponds to the displacement of the imaginary part of what you measure. So when you fully have transformed back, now generally have a complex function with an imaginary part, that corresponds to the phases, can tell you about the strain in the crystals. A lot of people are doing that. So he measures this when it's the projection of the strain on the vector that you are measuring. 
So let me you know I think I'm going to skip some, but this is just sort of examples again of quasi-esthetic word that are possible. So in this case we are measuring some faceted nano particles and we can see the split in the [UNKNOWN] that is due to the domain wall and the slow angle domain boundary. That shows up. I'll show you just very briefly, a couple of examples. In this case we're looking at a nanoparticle and again using the phasing we can measure the strain on different parts of the nanoparticle, and we can see that the strain at the corners, for example, is higher in these two corners than near the flat surfaces, which is consistent with the lapaz. Pressure that you would expect from the fact that it's a higher curvature here. There's sort of another quick example. So, you can get a nano scale distribution of strain in three dimension. It can slice it in many different ways. People have measured [UNKNOWN] electrics. This is a, a group of [UNKNOWN] at Argon and then they can measure for electric domains using exactly this technique, by using basically the phasing. So I wanted just to, highlight a couple of examples very briefly, this is my group's work on lithium ion diffusion, and all of this is quasi static measurements, meaning that it's not down x, at x itheals, it is done at six different sources, we can take time to collect these defraction patterns until we get very high resolution, but let me kind of maybe show you a couple of examples of some ultra fast work. Let me just skip though some of this, all this stuff. So this is the idea for the ultra fast work, that, has anybody talked about this already? Okay, so maybe I should, then I decide, then I should definitely talk about this. So the main idea, one of the interesting ideas that came out when LCLS was discussed was can you do a single molecule imaging. Using this phasing techniques, and the idea is that if you have a molecule, let's say a single biological molecule and the pulse is short enough, when the pulse goes through you'll make a speckled pattern. That is going to be basically interference of all those different atoms. And, if you could phase this this speckle pattern, you can go back and, and you see what the single molecule looks like. And the pulse is fast enough that, the molecule will block, this process called Columb Explosion, because you ionized. You stripped all the electrons and now there's a bunch of ions sitting there and they're going to repel each other, by Coulomb repulsion and they're going to explode, so their, their particles going to explode, but you can, if the x-ray are faster than that explosion, process, you can actually, you can see what the single molecule look like before you, you, you blew it up. Yeah. >> [UNKNOWN] computation would be feasible. It sounds like >> OK, so that's, that's the, that's one of the key questions. So right so this is a big, this is a big problem, right? So I sort of showed this as one slide and in Power Point physics it works great. In, in the real physics it's a little bit more challenging so let's talk about one of the key challenges here. So it sounds very good mathematically. So, first of all you have to figure out how long it takes to blow it up. And people have used molecular dynamic simulations, and there's a question of how, you know, how well do we know about these processes because, you know, I would argue maybe we don't, but it tells you that if your below let's say ten femtoseconds, if you pulse it short enough maybe you can do that. 
And, some people, say that it will on one femtosecond. I'm actually not working on this field myself but maybe that's changing every, every day this week. One challenge, sort of physics challenge is, you have to make sure that there is a particle waiting there, sitting there waiting for the beam to hit it, right? And so the injection, of molecules is a separate technological problem, how do you eject them? Ideally you want to inject them closed in a water. Or, yeah, some sort of aqueous environment, like helium or, or water droplet. So that you're looking at a sort of natural state of that biological molecule, not some dried up molecule, right? But that's, that's one challenge of how, how well can you time. You know, you only have 100 pulses per second, how well can you make sure that there is a particle there when the pulse goes through? The other sort of okay, so the computational challenge, and again this is one of the sort of most beautiful mathematical ideas that I've heard, in a while, but I think it has a lot of, a lot of challenges ahead, maybe one of you guys can actually help us solve it. By the way I believe that this is going to be possible. It's just a matter of getting the time. The, the, the problem is that, if you don't have enough statistics you're going to get a very noisy diffraction pattern, right? You don't have the luxury of. The way we do it, just collect many you know, little, letting your sample sit there, and collect your data until you get, you're satisfied with the quality of the Diffraction pattern. Most of the time, you're going to get a very noisy diffraction pattern. And the diffraction pattern you're going to get, is only the [UNKNOWN] sphere projection, through your three-dimensional speckle, you know? You know so you have some three-dimensional [UNKNOWN] of your molecule electron density. And you going to get a projection in a single measurement, right? You projection that, depending on the orientation of the molecule. But the ecosphere will cut through that, your speckled wall in 3D, And the projection is going to be very noisy. And so the question is, if you inject this, well, we can say, well I'm going to do this experiment again. But I, I'm going to inject a different mol, the same molecule again. It's going to get a different orientation alright. I can't control orientation very well, so it's going to be, be a random orientation that I'm going to expose it at and mathematically you could argue the following, you could say that all of these projections, random projections, they're going to go through the center through the cuticle zero point. And. If you know one of these, not maybe perhaps noisy defraction patterns, and there's going to be a different, let's say, consider a different random arraingiation of the fraction pattern slices that specklable at different angle, they going to have a common line here somewhere. And because you have a common line, you can maybe try to stage them together mathematically. You find the line that has the highest. Degree of correlation in a diffraction patterns. And so you say well I know now there's two of these diffraction patterns were taken, I know there is one common line there, I don't know it's like a hinge now, I don't know what the relative angle is, well if you took the third one and you find the common lines between the first two, then it allows you to kind of fix them in space. And perhaps there is a big uncertainty in, in finding each one. But then you keep doing this experiment at different angles. 
And, if you, are very good at classification of the fraction patterns, you have to understand what they look like. And, during this orientation, in, in reciprocal space, perhaps the hope is, perhaps you can actually. Stitch them together, to get a full three dimensional image. And again it sounds good mathematically but experimentally we're dealing with diffraction patterns that are super noisy, right? There's no, you know, we're trying to image something that is mostly hydrogen and carbon and, and it doesn't scatter very much and it's tiny little molecules. And since doing this is is very challenging, and you are not satisfied until you get an atomic resolution, right, it's not like when I image my body in the main, twenty nanometers maybe is enough, with ten nanometers maybe is enough. Here you want to have atomic resolution otherwise it's not useful, right? But this is a very challenging, approach but it may still work, is a lot of other. You get technical problems and technical difficulties that have to be overcome here computationally, technically you know, in terms of detection development and so on and so forth, but this could be an interesting development. OK, let's actually show you a couple of examples. I have like five minutes, so maybe I'll skip, well this is what we've, you know, this is sort of one of the first demonstration of. This ultra-fast imaging where you destroy the sample with one shot but you can get an image. This is, in this case, the image is sort of artificial image. And this is basically when there was a summer school here seven years ago, this was I think was widely circulated as sort of state-of-the-art. So you can etch something in a silica-nitrate membrane. You can blow it up and in a single shot you get a defraction pattern like this. It's already, after this shot goes through it felts the whole membrane, so it doesn't, doesn't leave. It no longer exists. This pattern only sees the beam for what of a 100 tento seconds, but reconstructing the single pattern can give you reasonable you know, I'll apply under what the sample look like. So you can, in principle, you could maybe use this as as this destroy, [INAUDIBLE] destroy [INAUDIBLE] So what has happened since then? And this is not a complete study but, one of the, I think exciting results of this lensless image interrupt based on coherence. It's the single mimivirus of particle imaging that was done here at LCLS. And then you have basically phase in this, this diffraction pattern and, and looking at two dimensional projection of different but supposedly identical, you know, version of the virus. And you can learn about this sort of internal structure by doing this, this phasing using this phasing algorithm. So this is I think the best, maybe somebody can tell me if there is even better example of this, this phasing approach. But so far this, I think, one, one example of how this, this spatial coherence and lensless imaging could be used. And and the free electron lasers to get some of this information. By the way, I think the second author here is, was from from [UNKNOWN] group was a, was a student here seven year ago, so it's kind of another good example that people who come to this school actually go on to do great stuff. Another example that I want to sort of pull out is Henry Chapman's work on, on nanocrystals, and, when you get a nano, when you, when you look at nanocrystals. 
So, there are a lot of biological crystals that are difficult to make microscopically large that could be used that you can determine their structure instead of using traditional methods, and these nanocrystals are very fragile, they will melt. In a signature beam much faster than you can, than you you need the amount of time that you need to to solve for for the full structure of those of that crystal. But it turns out the next ETL it's not a problem you can still use basically a a single shot of radiation single shots from multiple nano crystals. In this case, you can see the Bragg peaks but you can also see the fringes between the Bragg peaks. And essentially the fringes, you can think about these fringes as, telling you the shape of the particle, and the location of the Bragg peaks telling you the structure, internal structure of the unit cell, the unit cell. And so, in this experiment. Henry and his team were basically able to phase not a single molecule but a very small crystal that would probably be very difficult or impossible to do using sort of traditional synchrotron [UNKNOWN] methods. This is sort of another example. On the third example that I wanted to use, in this is the work of Jesse Clark who is. Some are here at Stanford where they looked at 3-dimensional propagation of acoustic phonons, in, inside of a single nanoparticle. And the idea is that basically they pumped it, using the infrared laser I believe. And created shockwaves going through the ladders. And then the image, the expansion of the ladders in response to that. Optical exertations in an nanocrystals by analyzing the defraction patterns and inverting them. So that allows you to look at the fault ray with the, with the phonon propagation inside of the crystal in 3D at this ultrafast time scales. And so the idea is that if you simply look at the location of the graphic, you can see that it basically, it oscillates as soon as you hit it. So that's this the shockwave propagation if you will as a function of delay time between the optical pulse and the X-ray pulse that probes the system. But it can go one step further and it can basically reconstruct not only the shape of the particle, but also the internal. Strained distribution which tells you displacement field, so called displacement field that, that which tells you, what is happening inside the nano particle as it, as the protons propagate it and make the lattice expand and contract. And so this is the group, a lot of the science that I showed you, some of this is work by, by several people. In particular Sebastian and, and Jim, and a little bit of Leandra, and so maybe I'll stop here and, I'm, I'm a little bit over my time, but I'll ask any questions that you might have. [APPLAUSE] >> For more, please visit us at stanford.edu

Methods

Conventional Raman spectroscopy probes only the near-surface region of diffusely scattering objects; in tissue, for example, it reaches only the first few hundred micrometres below the surface. This surface sensitivity is exploited in many applications where high chemical specificity enables chemical mapping of surfaces, such as tablet mapping.[5] Measurement deeper into diffusely scattering samples is restricted because the intense signal generated in the region of laser excitation dominates the collected spectrum.

The basic SORS technique was invented and developed by Pavel Matousek, Anthony Parker and collaborators at the Rutherford Appleton Laboratory in the UK. The method relies on the fact that most materials neither transmit light completely nor block it completely; instead, they scatter it. When a red laser pointer illuminates the end of a finger, for instance, the light scatters throughout the tissue of the finger. Wherever the light travels, some of it undergoes inelastic scattering due to the Raman effect, so most parts of an object will eventually generate a detectable Raman signal, even parts that are not at the surface. The trick in SORS is to make a measurement that avoids the dominant excitation region.

A SORS measurement comprises at least two Raman measurements: one at the excitation point and one at an offset position, typically a few millimetres away. Scaled subtraction of the two spectra yields separate estimates of the subsurface and surface spectra, as sketched below. For a simple two-layer system, such as powder in a plastic bottle, the powder spectrum can be recovered without knowing the bottle material or its relative signal contribution. Attempting the same measurement without a spatial offset would be severely restricted by photon shot noise from the Raman and fluorescence signals originating in the surface layer.[6]
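
To make the scaled-subtraction step concrete, below is a minimal sketch in Python with NumPy. The synthetic two-band spectra, the layer weights and the function names are illustrative assumptions rather than part of the published method; in practice the scale factor is typically chosen so that a band known to belong to the surface layer cancels.

```python
import numpy as np

def band(wavenumber, centre, width=8.0):
    """Synthetic Raman band modelled as a Gaussian line shape."""
    return np.exp(-0.5 * ((wavenumber - centre) / width) ** 2)

# Hypothetical two-layer sample: a surface layer (e.g. a bottle wall)
# with a band at 1000 cm^-1 and a sublayer (e.g. a powder) at 1400 cm^-1.
wn = np.linspace(800.0, 1800.0, 2000)
surface = band(wn, 1000.0)
sublayer = band(wn, 1400.0)

# At zero offset the surface dominates; at a few millimetres offset the
# relative subsurface contribution is larger (illustrative weights).
zero_offset = 1.0 * surface + 0.20 * sublayer
offset = 0.3 * surface + 0.15 * sublayer

# Choose the scale so the known surface band (1000 cm^-1) cancels,
# then subtract to leave a spectrum proportional to the sublayer alone.
i = int(np.argmin(np.abs(wn - 1000.0)))
scale = offset[i] / zero_offset[i]
subsurface_estimate = offset - scale * zero_offset
```

Here the result is proportional to the pure sublayer spectrum; an analogous subtraction in the other direction recovers the surface spectrum.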

Scaled subtraction works well for two-layer systems, but more complicated cases, such as an overlying material that shares components with the sublayer (living tissue, for example), may require multivariate analysis. When multivariate techniques such as principal component analysis are used, several spectra must be taken at varied offset distances, as sketched below. As the spatial offset increases, the ratio of the subsurface to the surface spectral contribution increases. However, the total signal also falls with increasing offset, so in a practical measurement the usable offset, and hence the achievable ratio, is limited.
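
As an illustration of the multivariate approach, the sketch below applies principal component analysis (via scikit-learn, reusing the synthetic surface and sublayer bands from the previous sketch) to a stack of spectra taken at several offsets. The decay constants and weights are assumptions chosen only to mimic the trend described above; real analyses usually apply baseline correction and normalisation first.

```python
import numpy as np
from sklearn.decomposition import PCA

# One spectrum per offset distance (0-5 mm). Both contributions decay
# with offset, the surface one faster, so the subsurface/surface ratio
# grows while the total signal falls (illustrative decay constants).
offsets_mm = np.linspace(0.0, 5.0, 11)
spectra = np.stack([
    np.exp(-1.2 * d) * surface + 0.3 * np.exp(-0.3 * d) * sublayer
    for d in offsets_mm
])

# The leading principal components capture the spectral shapes whose
# relative weights change with offset; the scores trace how the surface
# and subsurface contributions grow or fade across the offset series.
pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)   # shape (11, 2)
components = pca.components_          # shape (2, 2000)
```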

Inverse SORS[7] is a useful sub-variant of SORS that improves certain measurements, such as the analysis of tissue in vivo. Rather than illuminating with a circular spot and collecting at an offset point, the geometry is inverted: the sample is illuminated with a ring of light centred on the collection region, so that a constant offset is maintained around the whole ring. This has several advantages, including lowering the power density delivered to the sample for a given laser power (a rough worked example follows below) and allowing the offset distance to be adjusted simply, for example by changing the ring radius.
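
A rough worked example of the power-density advantage, using assumed beam dimensions purely for illustration: spreading the same laser power over a ring rather than a focused spot lowers the irradiance on the sample by more than an order of magnitude.

```python
import math

power_mW = 100.0                        # assumed laser power

# Point-illumination geometry: power focused into a small spot.
spot_radius_mm = 0.25
spot_area = math.pi * spot_radius_mm ** 2                 # ~0.20 mm^2

# Inverse SORS: the same power spread over a thin ring of light.
ring_radius_mm, ring_width_mm = 3.0, 0.25
ring_area = 2 * math.pi * ring_radius_mm * ring_width_mm  # ~4.7 mm^2

print(power_mW / spot_area)   # ~509 mW/mm^2 on the spot
print(power_mW / ring_area)   # ~21 mW/mm^2 on the ring (~24x lower)
```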

Micro-spatially offset Raman spectroscopy (micro-SORS) combines SORS with microscopy.[8] The main difference between SORS and micro-SORS is the spatial resolution: while SORS is suited to the analysis of millimetric layers, micro-SORS is able to resolve thin, micrometric-scale layers.

References

  1. P. Matousek; I. P. Clark; E. R. C. Draper; M. D. Morris; et al. (2005). "Subsurface probing in diffusely scattering media using spatially offset Raman spectroscopy". Applied Spectroscopy. 59 (4): 393–400. Bibcode:2005ApSpe..59..393M. doi:10.1366/0003702053641450. PMID 15901323.
  2. M. V. Schulmerich; K. A. Dooley; M. D. Morris; T. M. Vanasse; et al. (2006). "Transcutaneous fiber optic Raman spectroscopy of bone using annular illumination and a circular array of collection fibers". Journal of Biomedical Optics. 11 (6): 060502. doi:10.1117/1.2400233. PMID 17212521.
  3. C. Eliasson; P. Matousek (2007). "Non-Invasive Authentication of Pharmaceutical Products through Packaging using Spatially Offset Raman Spectroscopy". Analytical Chemistry. 79 (4): 1696–1701. doi:10.1021/ac062223z. PMID 17297975.
  4. C. Eliasson; N. A. Macleod; P. Matousek (2007). "Non-invasive Detection of Concealed Liquid Explosives using Laser Spectroscopy". Analytical Chemistry. 79 (21): 8185–8189. doi:10.1021/ac071383n. PMID 17880183.
  5. M. J. Pelletier (1999). Analytical Applications of Raman Spectroscopy. Blackwell Science. ISBN 978-0-632-05305-6.
  6. N. A. Macleod; P. Matousek (2008). "Deep Noninvasive Raman Spectroscopy of Turbid Media". Applied Spectroscopy. 62 (11): 291A–304A. Bibcode:2008ApSpe..62..291M. doi:10.1366/000370208786401527. PMID 19007455.
  7. P. Matousek (2006). "Inverse Spatially Offset Raman Spectroscopy for Deep Noninvasive Probing of Turbid Media". Applied Spectroscopy. 60 (11): 1341–1347. Bibcode:2006ApSpe..60.1341M. doi:10.1366/000370206778999102. PMID 17132454.
  8. C. Conti; C. Colombo; M. Realini; G. Zerbi; P. Matousek (2014). "Subsurface Raman Analysis of Thin Painted Layers". Applied Spectroscopy. 68 (6): 686–691. doi:10.1366/13-07376. PMID 25014725.