AI models have recently achieved astonishing results and are consequently being deployed in a fast-growing number of applications. However, because they are highly data-driven, relying on billion-scale datasets scraped more or less indiscriminately from the internet, they also reproduce degenerate and biased human behavior: such behavior is human-like precisely because the model merely reflects its training data. Filtering the training data, in turn, degrades performance. Models are therefore fine-tuned, e.g., via RLHF, to align them with human values. Yet the question of which values a model should reflect, and how it should behave in different contexts, remains unresolved. In this talk, we will look at controllable generative AI systems and present ways to align these models without fine-tuning them. Specifically, we present strategies for attenuating biases in generative text-to-image models after deployment.