Mona Zebarjadi sets out to answer the question: how do we know whether the theory is wrong?
Thermoelectrics Session, MRS Memorial
Ten years ago today, I attended MRS for the first time in my life. I was a third-year graduate student, and I had some data on Monte Carlo simulations of heterostructures.
It was extra special because I was also participating in an MRS graduate student competition. Basically, this meant that I’d come to MRS, present my work, and then — based on my presentation — people would decide if I’d get the gold medal or the silver one, etc.
The competition began by introducing the judges to us. Sure enough, Millie was one of those judges. That was the first time I saw her, and although I knew she was famous, I didn’t know very much else about her, back then.
I finished my presentation and the judges started asking questions. This was ten years ago, of course, so I don't remember any of the questions — except for one, Millie's question. I still remember her question, word for word!
Millie said, "You presented something relevant to heterostructures, and you're proposing them for thermoelectric applications. Now, you've been dealing with hundreds of nanometers in terms of length scale. If I squeeze it down to real nanometer scale, and if I repeat your single barrier, I'm going to get a superlattice. There is a theory for that and, theoretically, we know that superlattices are advantageous over bulk materials and have superior thermoelectric properties. Many people have tried that experimentally, and no one has been quite successful. Do you know why?"
I hadn’t expected a question that went so far beyond what I presented. A hundred thousand panicked thoughts flashed through my mind as I stood there, frozen, just staring at Millie.
My first thought was — well, if the theory and experiment don't match, then the theory must be wrong! But then my second thought was: but, wait, I don't know enough about the theory to be able to properly criticize it — so I can’t go in that direction. And then I thought — didn't I hear about an experiment that actually confirmed the theory? Is this a trick question? But I couldn't remember the details of that experiment, either. And then, as I was looking at Millie, I thought: Oh my God, wasn't it her idea? Didn't she propose that? And I considered resorting to panicked pleading, with something like, ‘I'm so sorry, I didn't read your papers; I don't know why you were wrong…’ But panicked pleading probably wouldn’t look very good in front of an audience, so I couldn’t do that, either.
So, in short, I was in panic-mode. A friend in the audience realized I was in a state of shock, and called out my name to bring me back to reality. This snapped me out of it. That was when I realized I actually did have to answer the question.
So I blamed it on defects and rough interfaces in the experiment, and... I can't remember what else I babbled, that day. All I knew was — that question would be one I’d remember for the rest of my life.
Two years later, I had the privilege and honor of joining Millie’s group as a joint postdoc, and I worked with her group for three years.
So here we are, 10 years later — and I want to revise my answer.
Millie, if you’re listening up there, I still don't know the answer to your question! I'm still not quite sure. But I’d like to talk about it, today.
It turns out Millie asked many other people this same question as well, and they were also stumped; many published papers attempting to answer it. Some have pointed to leakage of the electron waves due to finite barrier heights, some to the importance of the thermal de Broglie wavelength, and so on — but I'd like to also point out that there might be a simpler explanation: namely, the fact that we don't have a complete predictive theory to tell us what the best thermoelectric material would be and how you would make it.
The theories we do have all require input from experiment; so what generally happens is that you make a material experimentally first, and then you try to optimize all the parameters theoretically. This is very difficult to do with a superlattice because, with so many parameters, it's very expensive to optimize. Perhaps we could use simpler structures — monolayers, bilayers, etc. While these are not necessarily easy to handle, they are much simpler to optimize, both experimentally and theoretically. You could also use two layers, where you have one layer and add another for surface doping. Or, even better, you can add a layer in between to make a modulation-doped structure. You can also play with this and make a stack of them and look at the cross-plane transport, which is at the edge of the computational capacity that first-principles calculations can handle.
Let's take a look and see why this is. If there is one non-trivial property that thermoelectric materials should have, it's the asymmetry between hot electrons and cold electrons. This concept has kept coming back throughout the history of thermoelectrics. You recall that Ioffe said semiconductors are good for thermoelectric applications — this is because they have a bandgap. And if I put my Fermi level in exactly the right spot, I can break the symmetry between hot electrons and cold electrons — electrons above the Fermi level and electrons below the Fermi level. When Millie pointed out the sharp features in these nanostructures, she was pointing out the same concept. You are breaking the symmetry between hot electrons and cold electrons. You have more of one type, compared to the other.
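This hot/cold asymmetry can be made concrete with a small numerical sketch, assuming a Landauer-type picture in which the Seebeck coefficient is the transmission-weighted average of (E − μ). The step transmission and the 0.1 eV offset below the band edge are illustrative assumptions, not a model of any material in the talk:

```python
import numpy as np

kB = 8.617e-5   # Boltzmann constant, eV/K
T  = 300.0      # temperature, K

E = np.linspace(-1.0, 1.0, 20001)   # energy grid, eV

def seebeck(mu, transmission):
    """Landauer-style Seebeck coefficient, S = -<E - mu> / (e*T),
    averaged with weight transmission(E) * (-df/dE); E in eV gives V/K."""
    f = 1.0 / (1.0 + np.exp((E - mu) / (kB * T)))
    w = transmission(E) * f * (1.0 - f) / (kB * T)   # T(E) * (-df/dE)
    return -np.sum(w * (E - mu)) / (T * np.sum(w))   # grid spacing cancels

# Transmission symmetric about the Fermi level: hot and cold electrons cancel.
S_metal = seebeck(0.0, lambda E: np.ones_like(E))

# Band edge 0.1 eV above the Fermi level (gap below): only hot electrons conduct.
S_edge = seebeck(-0.1, lambda E: (E > 0.0).astype(float))

print(f"symmetric transmission: S = {S_metal * 1e6:7.1f} uV/K")
print(f"band edge above mu:     S = {S_edge * 1e6:7.1f} uV/K")
```

The symmetric case comes out essentially zero, while the band-edge case comes out to a few hundred µV/K (negative, i.e. n-type): the asymmetry in the weighting is doing all the work.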
I'm going to come back to this point later on, in my talk.
She also, of course, pointed to the reduction in lattice thermal conductivity.
Let me also tell you a little bit about 2D layered materials and the other properties they have. One of the most important parameters in thermoelectric applications is the size of the bandgap. The position of the energy levels will tell you what the Seebeck coefficient is, in these types of materials.
One of the advantages of 2D layered materials is that you can actually engineer the bandgap. This is much simpler than in the case of bulk materials. In the case of 2Ds, you can simply change the number of layers or apply stress/strain to modify the bandgap.
The graph above shows the local density of states plotted vs. the position. And, as you can see, the white area in the middle is the area where there is a gap.
This figure is of black phosphorene sandwiched between two graphene layers and finally between gold. And, as I'm increasing the number of black phosphorene layers from 1 to 3 to 5, the bandgap shrinks, and I can engineer the band bending and the band diagram of this structure. If you look at the q-resolved transmission functions of these structures, the same thing is happening. So, this is the transmission function, and you can see that here, I have a gap. But because I have only one layer, there is tunneling going on, so the transmission is not zero. As I increase the number of layers, tunneling is suppressed, and we are opening a gap in the transmission.
This has advantages for thermoelectric applications because you don't want to have tunneling under the barrier. What you want to have is thermionic emission over the barrier. So you can change the bandgap by the number of layers.
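The suppression of under-barrier tunneling with layer count can be sketched with a one-line WKB estimate: transmission through a barrier of height Φ and thickness d falls off as exp(−2κd) with κ = √(2mΦ)/ħ. The barrier height and per-layer thickness below are illustrative assumptions, not values fitted to the actual stack:

```python
import numpy as np

hbar = 1.0546e-34   # J*s
m_e  = 9.109e-31    # free-electron mass, kg
eV   = 1.602e-19    # J per eV

phi = 0.3 * eV      # barrier (gap) height: an assumed, illustrative value
d0  = 0.5e-9        # thickness of a single layer, m: also an assumption

# WKB decay constant inside the gap
kappa = np.sqrt(2.0 * m_e * phi) / hbar   # 1/m

for n_layers in (1, 3, 5):
    t = np.exp(-2.0 * kappa * n_layers * d0)   # under-barrier transmission
    print(f"{n_layers} layer(s): T_tunnel ~ {t:.1e}")
```

Each added layer multiplies the tunneling transmission by the same small factor, so a few layers are enough to make thermionic emission over the barrier the dominant channel.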
This is another example that shows the corresponding change in bandgap based on the number of layers, this time in MoS₂. You can also change it by applying strain or stress. So you can put it on different substrates in order to create strain, and you can see the gap changing.
You might say, “Well, why should I care?” The reason you should care is that the Seebeck coefficient would change as you are changing your gap.
Here is an example calculation for TiSe₂, and as we increase the strain, the Seebeck coefficient also increases:
Another advantage of these 2D materials is the fact that you can gate them. So as I mentioned, it is not easy to optimize all the parameters — but here, at least for carrier concentration, you don't need to optimize it. You don't need to make hundreds of samples and find out which one is the best! Instead, you can take a sample and change the carrier concentration by gating and figure out the sweet spot for thermoelectric applications.
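What that gating sweep looks like can be sketched with a single-band toy model, assuming a linear transport distribution, a constant relaxation time, and arbitrary units for the conductivity (all toy assumptions): sweeping the Fermi level, as a gate would, traces the power factor through a maximum near the band edge.

```python
import numpy as np

kB, T = 8.617e-5, 300.0              # Boltzmann constant (eV/K), temperature (K)
E  = np.linspace(0.0, 1.0, 10001)    # conduction-band energies, eV (edge at 0)
dE = E[1] - E[0]

def transport(mu):
    """Single-band toy model with constant relaxation time:
    conductivity ~ integral of Sigma(E) * (-df/dE), with Sigma(E) ~ E assumed."""
    f = 1.0 / (1.0 + np.exp((E - mu) / (kB * T)))
    w = E * f * (1.0 - f) / (kB * T)              # Sigma(E) * (-df/dE)
    sigma = np.sum(w) * dE                        # arbitrary units
    S = -np.sum(w * (E - mu)) * dE / (T * sigma)  # Seebeck, V/K
    return sigma, S

mus = np.linspace(-0.3, 0.3, 301)    # "gate voltage" sweep of the Fermi level
pf  = [S * S * sigma for sigma, S in (transport(mu) for mu in mus)]

best = mus[int(np.argmax(pf))]
print(f"power factor peaks near mu = {best:+.3f} eV from the band edge")
```

One gated sample swept this way replaces the many samples you would otherwise have to grow at fixed doping levels to locate the same optimum.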
Let's talk a little in terms of power factor, because there could be a material that has a very high power factor. This is a collaboration that I've done with Professor Andre's group, and you can see that the power factor that we're measuring is quite high. And I'm using an unusual unit here: watts per meter-kelvin, so that we can compare it directly to thermal conductivity. This is twice as high as the other power factors reported in the literature.
Here is a graph. The high power factor indicated by the blue arrow is YbAl₃, which is supposed to be about five (in these units) — and we're getting around 10! So in terms of power factor, that could be very high. And remember, we're talking about the in-plane power factor.
But you might point out that they also have a high thermal conductivity. And of course, that is true! A lot of these 2D materials have a very high thermal conductivity — graphene, for example. That could be useful for certain applications like electronic cooling, where you're actually pumping heat from a hot spot to a cold spot. In that case, you’re not dealing with power generation mode, so you don't necessarily need a high ZT material — what you really need is something to pump heat both passively and actively.
So, overall, for electronic cooling and applications like it, where you require high thermal conductivity combined with a high thermoelectric power factor, I’d recommend looking into 2D materials. I think they’d be good candidates.
And lastly, I want to just point out that, many times, we borrow concepts from other areas of science, which means that the theory — especially in the case of thermoelectrics — might be a little bit misleading. It doesn't mean that the theory is wrong. It just means that you have to be careful when you're using it.
As an example, I want to bring up the story of the optimum bandgap. This has been in the literature for quite a while, and people would always say that the best bandgap for thermoelectric materials would be 6-10 kBT or higher. However, if you actually plot the power factor versus bandgap divided by kBT, you don't actually see that optimum! The only material that really exhibits it is bismuth telluride.
Now, you've seen this before, in Professor Heremans's talk, just a few minutes ago, when he mentioned cobalt. Cobalt is metallic, but it still has a very high power factor. So why is that? Is the theory wrong?
No, the theory is not wrong, per se. It's just overgeneralized. You have to realize that the theory is valid if — and only if — the bandgap is the only parameter that is breaking the symmetry between hot and cold electrons. This is an important caveat, as most materials will find a different way to break the symmetry, and as soon as the symmetry is broken, the bandgap becomes essentially irrelevant.
I'll show you a couple of examples. The first is graphene, which I just mentioned. Graphene doesn't have a gap and is very symmetric; however, it does have charge puddles — and they are what breaks the symmetry.
Now let's look at YbAl₃. This doesn't have a gap, either! However, it has a record-high power factor. Why? Because it comes from the sharp f-orbital states of the material, and those break the symmetry.
We can also have resonant states, which is something Professor Heremans's group brought up. And those can break the symmetry as well.
So, overall, you can see that any time you are breaking the symmetry, you can achieve the equivalent of having a bandgap.
Last of all, I want to mention the case of semi-metals. Now, most people who are thinking of experimenting with semi-metals will look at the literature, sigh, and then put the semi-metals aside and move on. Why? Because semi-metals don't have a gap, so an overgeneralization of the theory would have you believe they're useless. There was even a report on theoretical calculations of mercury telluride (HgTe) in its semi-metallic phase, and they predicted almost zero ZT for this material.
So I looked at HgTe, as well. I wanted to know what was going on.
Well, it turns out, if you do the calculations using a cheap PBE calculation, you won't see much difference between the effective masses of electrons and holes. And since there is no asymmetry, you would expect a low Seebeck coefficient and a very low ZT.
However, if you do more expensive HSE calculations, you can see that, actually, there is an effective mass difference, which means you’d expect the Seebeck coefficient to be higher. Basically, we are breaking the symmetry by the difference in the electron and hole mass.
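The cancellation being described can be written down in two lines. Assume an intrinsic semimetal with equal electron and hole densities, each band alone contributing a Seebeck of magnitude S0 (150 µV/K here, an illustrative number, not the HgTe value), and a band conductivity scaling with mobility as 1/mass; the net Seebeck is then the conductivity-weighted average:

```python
def total_seebeck(m_e, m_h, S0=150e-6):
    """Two-band toy model of an intrinsic semimetal (n = p).
    Each band alone has Seebeck magnitude S0 (V/K, an assumed value);
    band conductivity is taken to scale as mobility ~ 1/mass.
    Returns the conductivity-weighted net Seebeck."""
    sigma_e, sigma_h = 1.0 / m_e, 1.0 / m_h
    return (sigma_e * (-S0) + sigma_h * (+S0)) / (sigma_e + sigma_h)

# Equal masses (the PBE-like picture): electron and hole terms cancel exactly.
print(total_seebeck(1.0, 1.0))   # 0.0

# Holes a few times heavier (an HSE-like asymmetry, illustrative ratio):
print(total_seebeck(1.0, 4.0))   # negative; most of S0 survives
```

With equal masses the two bands cancel; any mass asymmetry tips the balance, which is qualitatively what the HSE band structure (and the measurement) shows.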
On this graph, you can see exactly what I'm talking about. The dotted black line is what the PBE calculations show — and you get a Seebeck of around 50 µV/K at the intrinsic level. The red line is the HSE calculations. As it turns out, when you do HSE calculations, the Seebeck is higher than 50. So now it's interesting.
Well, just to see if we were right, we did actually go ahead and make the sample! And once we made it and measured it, we discovered the Seebeck coefficient is actually around 150 µV/K! So it's even more than what the theory predicts using HSE. Why? Well, it's because of the constant relaxation time approximation that the theory assumes — and we can correct that by using an energy-dependent relaxation time.
So Millie, if you’re listening up there, I’m still working on finding out the answer to your question, but here is what I’d like to add to my answer from long, long ago.
Overall, Millie, I think the theory is good, but it can be misleading if you don't use it appropriately. And I think that people should be aware of that, and avoid overgeneralization with the theory — otherwise, we might miss out on very promising thermoelectric materials.
That is my overall message.
And with that, I just wanted to acknowledge my collaborators.
Thank you so much.