Not sure what you mean by "adding to".
I.e., you train a LoRA on 20 images, then later select that LoRA as the starting point and continue the training with either:
- 1.1: 20 new images, OR
- 1.2: the 20 old images along with 20 new ones (40 images in total)
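For reference, if the trainer is kohya-ss sd-scripts, continuing from an existing LoRA is done by passing it in as the starting weights. A minimal sketch, assuming that trainer (all paths are made up, and most required arguments are omitted):

```python
# Sketch only: resume LoRA training from existing weights with kohya-ss
# sd-scripts. Paths are hypothetical; a real run needs more arguments
# (resolution, learning rate, scheduler, etc.).
import subprocess

subprocess.run([
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "base_model.safetensors",
    "--network_module", "networks.lora",
    "--network_weights", "old_lora.safetensors",  # the LoRA to continue from
    "--train_data_dir", "train",                  # folder with the new image set
    "--output_dir", "output",
])
```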
With 1.2 you'd probably be better off just retraining from scratch, since doubling the number of images would potentially change what gets learned quite a bit.
With 1.1 I suspect it would depend on how the images were captioned, but I think it would "confuse" the AI. Generally the training picks up little to nothing of the background unless there's "room left" to do so and "there's nothing left to learn" about the subject. Your training set should have backgrounds that are as varied as possible anyway, which makes it less likely for the AI to associate background elements with the subject you're training.
As for multiple subjects in one LoRA, that should work just fine, assuming you can keep it all within the "size limits" of the LoRA. There are LoRAs with one subject wearing different clothing or styles; I can't remember seeing any with completely different people, but the training setup would be the same: the different image sets just go in different folders and are tagged accordingly (see the layout sketch below).
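To make the folder setup concrete, here's a minimal sketch of the kohya-style layout I'd expect for two subjects in one LoRA (subject names and repeat counts are made up; the leading number in each folder name is that set's repeat count, so the sets can be weighted independently):

```python
# Hypothetical kohya-style dataset layout for two subjects in one LoRA:
#
#   train/
#       10_alice/  <- alice images + .txt captions tagged with her trigger word
#       10_bob/    <- bob images + .txt captions tagged with his trigger word
#
from pathlib import Path

for subject_dir in ("10_alice", "10_bob"):
    Path("train", subject_dir).mkdir(parents=True, exist_ok=True)
```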
(Off topic: one question is how "efficiently" weights are stored in the LoRA. I.e., if you have one character with and without a hat, would it store just the differences between them, or two complete "weight sets"?)
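For what it's worth, my understanding of the format is that a LoRA file stores one low-rank matrix pair per target layer, shared by everything in the training set, rather than separate weight sets per concept; "with hat" and "without hat" share the same matrices, and the captions steer how they're used. A conceptual sketch of what one such pair encodes:

```python
# Conceptual sketch of a single LoRA pair for one layer: the file stores a
# "down" and an "up" matrix, and together they encode one full-rank delta
# applied to that layer, shared across all trained concepts.
import torch

rank, d_in, d_out, alpha = 8, 768, 768, 4.0
down = torch.randn(rank, d_in)          # "lora_down": d_in -> rank
up = torch.zeros(d_out, rank)           # "lora_up": rank -> d_out (zero-init)
delta_w = (alpha / rank) * (up @ down)  # the delta this pair represents
print(delta_w.shape)                    # torch.Size([768, 768])
```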
You'd probably need to make sure your sets are fairly close in how fast the AI picks them up, though. You can compensate for some of it by giving each set a different repeat count, but just because repeats * images is the same for two sets doesn't mean they'll learn at the same rate.
Through testing it seems that even running the same set at something like 4 repeats for 4 epochs doesn't give the same end result as 2 repeats for 8 epochs, even though the total step count is identical. This could be due to "something" in the code itself that makes it behave slightly differently; I'd rather not dig into all that. If you're running batches as well, that apparently throws in another curveball. Anyway...
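To make that arithmetic concrete: the two schedules give an identical total step count, so whatever differs must come from how the steps are grouped into epochs (shuffling, snapshot points, any per-epoch scheduler behavior). A quick check:

```python
# Plain arithmetic, no trainer specifics: same total optimizer steps,
# different epoch structure.
def total_steps(images: int, repeats: int, epochs: int, batch_size: int = 1) -> int:
    return images * repeats * epochs // batch_size

print(total_steps(20, 4, 4))  # 320
print(total_steps(20, 2, 8))  # 320
```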
I'd planned to put multiple subjects into one LoRA but haven't gotten around to it yet, as I was struggling to find a common setting so they'd all learn at roughly the same rate. I considered training one LoRA on one set and then "appending" a new set to it. I started on it with the LoRA I posted a while back, but an update broke things so I can't even launch the trainer, and I've not gotten back to it yet.
I remember seeing something about LoRA merging at the time too, but I'm not sure how that works.
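As far as I can tell, simple merges just take a weighted sum of the matching tensors from two LoRA files. A rough conceptual sketch, not any specific tool (filenames are made up, and real merge scripts handle mismatched ranks and alpha values more carefully):

```python
# Conceptual LoRA merge: weighted sum of matching tensors from two files.
# Note: summing the down/up factors separately only approximates summing
# the full-rank deltas; some tools concatenate along the rank dim instead.
from safetensors.torch import load_file, save_file

a = load_file("lora_a.safetensors")  # hypothetical filenames
b = load_file("lora_b.safetensors")
merged = {k: 0.5 * a[k] + 0.5 * b[k] for k in a if k in b}
save_file(merged, "merged.safetensors")
```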