Reading through the other thread, I can see that you're very frustrated that Bose doesn't list a nominal input level. I apologize that this spec isn't listed.
I will give you the details that I do have available to me with a brief explanation first.
The L1 is a power amplifier. (I know you know this, but I want to emphasize it.) The Analog Input on the power stand is a sensitivity control. The L1 doesn't have separate input and output knobs; it's an amp with a single sensitivity control.
As a sensitivity control, how the input voltage turns into output power is the crux of the matter. If you send the L1 a signal from a device referenced to +4 dBu and set the trim pot to some specific "unity" position, that setting may or may not provide an appropriate SPL for the room. That's why a gain range is published rather than a nominal input level. It all comes down to staying within an acceptable input range and then turning that into an appropriate output power for the room. Whether a spec is published or not, the L1 is designed to work with standard equipment, including devices at +4 dBu and devices at -10 dBV.
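As a rough sketch of why a single published nominal level can be misleading for an amp with one sensitivity control, consider this toy model. The rated power, load impedance, and gain numbers here are made up for illustration; they are not L1 specs.

```python
# Toy model of a power amp with a single sensitivity (trim) control.
# RATED_POWER_W and LOAD_OHMS are hypothetical numbers, not L1 specs.
RATED_POWER_W = 250.0
LOAD_OHMS = 8.0

def output_power_watts(v_in_rms, trim_gain):
    """Scale the input voltage by the trim gain, convert to power,
    and clip at the amp's rated output."""
    v_out = v_in_rms * trim_gain
    return min(v_out ** 2 / LOAD_OHMS, RATED_POWER_W)

# The same +4 dBu source (about 1.23 V RMS) can land anywhere from
# background level to full power depending only on the trim setting:
quiet = output_power_watts(1.23, 1.0)   # well under a watt
loud = output_power_watts(1.23, 40.0)   # clipped to the 250 W rating
```

The point of the sketch: with one knob, the "right" setting depends on the room, which is why a gain range rather than a single nominal input level gets published.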
Here’s what I can say: Here in the office, we played a 1kHz signal through a mixer with a +4dBu output. We set the mixer to unity. With this input signal, the LED on the power stand was just about to hit red at the center position on the trim pot. So that’s roughly full power at the center point of the L1’s trim.
Nick, I am fine with a sensitivity-at-unity specification. At unity, what is the input sensitivity of the L1 Model II amplifier? Unity is not a variable here, so sensitivity should be easy to nail down, and I'm sure Bose engineers already know the answer to this question. The sensitivity at unity of a Mackie TH15A powered speaker (also with one input and only one volume knob), for instance, is +4 dB.
Sorry for the delay in getting you an answer from Engineering.
The balanced input of the L1 Model II, at the center volume-control position, will give you max power out for an input of about -10 dBV.
In that sense (sensitivity, defined as the input that gives you rated output) the input is similar to the consumer standard.
The other way these conventions for nominal levels (+4 dBu or -10 dBV) are used is to understand headroom above the nominal level, that is, how hot a signal you can put into the input without clipping it.
The L1 MII is compatible with Pro-level output devices. Say you run a Mackie mixer with nominal +4 dBu output and peaks up to +24 dBu. You can connect this to the L1 MII, then turn down the L1 MII pot until your acoustic peaks are at the max level you want.
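To make the two conventions concrete, here is the standard reference arithmetic behind +4 dBu and -10 dBV (textbook definitions, nothing L1-specific):

```python
import math

def dbu_to_volts(dbu):
    """dBu is referenced to 0.775 V RMS."""
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    """dBV is referenced to 1.0 V RMS."""
    return 10 ** (dbv / 20)

nominal_pro = dbu_to_volts(4)        # ~1.23 V, pro "+4 dBu" nominal
nominal_consumer = dbv_to_volts(-10) # ~0.316 V, consumer "-10 dBV" nominal
peak_pro = dbu_to_volts(24)          # ~12.3 V, the mixer's +24 dBu peaks

# Voltage difference between the two nominal conventions, in dB.
# Note it is ~11.8 dB, not the 14 dB you get by subtracting the raw
# numbers, because the two scales use different reference voltages.
diff_db = 20 * math.log10(nominal_pro / nominal_consumer)
```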
Thank you for replying. Possibly this answers my question, with some clarification. Do you mean that unity is at 12:00 and the nominal line-level input at unity is -10 dBV? It really explains nothing about how the sensitivity circuit/knob is amplifying or attenuating a signal to match the L1's own internal nominal input at unity unless we know what that internal level is; secondarily, it is helpful but not entirely necessary to know where unity sits on the dial. If the answer is 12:00 and -10 dBV, then fine. I'm not suggesting 9:00 and +4 dBu would be better, only that it is helpful to know the nominal input at unity, whatever and wherever that is. If this is the case, it tells us that the L1's internal nominal input is -10 dBV, that the L1 matches consumer-standard -10 dBV inputs at unity (neither amplifying nor attenuating them), and that it matches professional-standard +4 dBu inputs by attenuating or "padding" them by roughly 11.8 dB (the actual voltage difference between +4 dBu and -10 dBV, not the 14 dB you get by subtracting the raw numbers across different reference voltages).
You are expressing it pretty well: the L1 matches consumer inputs at 12:00, and pro inputs somewhat counter-clockwise from there.
I think you made a good point in earlier posts that the optimal level is important.
An example of a bad case is putting too low a signal into a piece of equipment, and having to turn the gain way up. That can increase the noise level significantly from the added gain.
On the other end, if your input is too hot, the main issue in my experience is avoiding clipping the input, as opposed to signal-to-noise issues. Your experience may be different.
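The two failure modes described above can be sketched numerically. The equivalent input noise and clip level below are made-up figures for illustration, not measured L1 specs.

```python
# Illustrative gain-staging arithmetic. The -100 dBV equivalent input
# noise and +12 dBV clip point are assumed figures, not L1 specs.
STAGE_NOISE_DBV = -100.0  # stage noise, referred to its input
CLIP_LEVEL_DBV = 12.0     # hypothetical input clip point

def snr_db(signal_dbv):
    """Stage noise is fixed at the input, so gain applied after it
    raises signal and noise together: SNR is set by the input level."""
    return signal_dbv - STAGE_NOISE_DBV

def headroom_db(signal_dbv):
    """dB of margin before the input stage clips."""
    return CLIP_LEVEL_DBV - signal_dbv

snr_nominal = snr_db(-10.0)     # consumer nominal: 90 dB SNR
snr_weak = snr_db(-40.0)        # signal 30 dB too low: only 60 dB SNR
hot_margin = headroom_db(10.0)  # a hot +10 dBV peak: just 2 dB left
```

Too-low inputs cost signal-to-noise; too-hot inputs cost headroom, which matches the trade-off described above.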
If you have a pro signal source in front of the L1, I don't know of any performance advantage to using a transformer over turning down the level control. To optimize signal to noise it may even be slightly preferable to use the level control on the unit instead of a transformer.
Do you have any other questions on specific use in your application that I can answer?
No Bill, a confirmation of the nominal input or "sensitivity" at unity would answer my question.
Just knowing that the L1 "matches" this input value at this setting or "matches" that value at that setting tells us nothing about whether the L1 is attenuating or boosting the signal to do so. Only knowing the input level at which the input exactly equals the L1's internal level with the dial at unity (the L1 neither boosting nor cutting the input to match) will tell us when the L1 is amplifying or attenuating any other input value in its range.
You may be right about attenuation being preferable to a transformer, but that depends on the transformer, the attenuator, and the range between the signals being matched. A line out from a console back into the input of a guitar amp is certainly one case where one may want to A/B a transformer against attenuation. My point in asking the question in the first place, and in continuing to stand behind it, is this: without knowing where unity is and what the nominal input is, how would one have any idea what the circuit is doing and what to consider A/B'ing? Why would one insert a transformer between two signals that are already perfectly matched at unity? And if we don't know where unity is or what the nominal level is, how could we even weigh that choice?
So I must still ask for clarification: I understand that you are now telling me that a -10 dBV input "matches" with the sensitivity control set at 12:00, but is 12:00 unity, and is the nominal input or sensitivity at unity -10 dBV?