I've been researching the relationship between the bias current of power tubes (6L6) and headroom. There are a few conflicting opinions out there, and it seems that the hi-fi crowd has a bit of a different opinion versus the guitar amp crowd. What I have been able to glean is that a power tube has a sort of bell curve in its bias range. According to what I have read, the highest headroom is in the middle of that curve, which I guess would be in the 50% range.

So take a 6L6GC that has a 30 watt dissipation rating. In my amp I have a plate voltage around 440 V. Agreeing that the hottest I should bias these tubes is 70% of the dissipation, and at the other end 50%, I get a range of around 34 mA to 47 mA. Would it be safe to assume that the highest headroom before breakup would be in the middle of those two extremes? It seems the D amps prefer a bias setting on the cool side of things. I usually have my amp biased around 34 mA. Would increasing that bias to, say, 40 mA give it more headroom? Or are there other factors I am not understanding that should be considered?

I think "headroom" is an oft misused/misunderstood term. Strictly speaking, it is the magnitude of an input signal that an amplifier may amplify without transitioning into a nonlinear region of operation. A lot of times, people think of it more as "how loud can the amp get before it distorts." That's a much more complicated situation, so I'll try to explain a bit here:

With any sort of active device, be it a BJT, MOSFET, or tube, you generally have a "linear" region of operation sandwiched in between two "nonlinear" regions of operation. In the case of a vacuum tube in class A operation, the linear region is bounded on one side by Vg - Vk = 0 (we will naively assume this, but in actuality it is a bit more complicated), and on the other side by Vg = -Vcutoff, where -Vcutoff is the grid voltage at which point no current flows from plate to cathode (which will depend on the load line). So ideally, to get the maximum amount of headroom, you would bias exactly in between the two nonlinear regions. If you didn't, you would hit a "wall" on one side before the other, causing you to clip asymmetrically. If you want maximum headroom, you should be hitting both "walls" simultaneously. Of course, optimal headroom means symmetrical clipping, which means near total cancellation of even-order harmonic distortion.

In the case of a class AB1 push-pull amplifier, it's not so simple, because at any point in the input cycle one tube should be conducting and the other should be (close to) cutoff. This means that you can theoretically swing a voltage (almost) twice what you could in class A operation, and therefore put out ~4x the power (not that simple, though). The colder you bias, the less "overlap" there will be between the pair of output tubes, so the voltage swing you can accommodate is larger (this doesn't necessarily mean more power, because the lower the current goes, the less power is dissipated). So by biasing colder, you can increase your "input headroom", but at some point you start getting crossover distortion.
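The bias window in the question is just arithmetic on the dissipation rating: idle current = fraction × P_max / V_plate. A minimal sketch, assuming the 70%/50% rule-of-thumb fractions from the post (the function name and parameters are ad hoc, not from any datasheet):

```python
def bias_window_ma(p_diss_watts, v_plate, hot=0.70, cold=0.50):
    """Idle-current range (mA) for a given fraction of max plate dissipation.

    Idle current = fraction * P_max / V_plate (screen dissipation ignored).
    """
    lo = cold * p_diss_watts / v_plate * 1000.0
    hi = hot * p_diss_watts / v_plate * 1000.0
    return lo, hi

lo, hi = bias_window_ma(30.0, 440.0)   # 6L6GC: 30 W rating, 440 V plate
print(f"cold (50%): {lo:.1f} mA")      # ~34.1 mA
print(f"hot  (70%): {hi:.1f} mA")      # ~47.7 mA
print(f"midpoint  : {(lo + hi) / 2:.1f} mA")
```

This reproduces the 34–47 mA range quoted in the question; note the midpoint of that current window is a dissipation midpoint, not necessarily the grid-voltage midpoint the headroom argument below is about.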
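The "bias midway between the two walls" argument for a class A stage can be sketched numerically. This is a toy model, not a design tool: the linear region is taken as grid voltages between 0 V (grid conduction) and -v_cutoff (plate-current cutoff), and the cutoff figure used here is an assumed number for illustration only.

```python
def input_headroom(v_bias, v_cutoff):
    """Max clean sine amplitude (V) for a grid biased at v_bias (negative).

    The clean swing is limited by whichever "wall" the signal hits first:
    grid conduction at 0 V, or plate-current cutoff at -v_cutoff.
    """
    room_to_zero = abs(v_bias)               # swing before grid conduction
    room_to_cutoff = v_cutoff - abs(v_bias)  # swing before cutoff
    return min(room_to_zero, room_to_cutoff)

v_cutoff = 50.0  # assumed cutoff voltage, purely for illustration
for vb in (-15.0, -25.0, -35.0):
    print(f"bias {vb} V -> headroom {input_headroom(vb, v_cutoff)} V")
```

Headroom peaks at v_bias = -v_cutoff / 2, where both walls are reached simultaneously and clipping is symmetric; biasing off-center in either direction trades headroom away on one side first.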