Controllable summarization models are typically limited to a short text, such as a topic mention, a keyword, or an entity, to control the output summary. At the same time, existing controllable summarization models are prone to generating artificial content, resulting in unreliable summaries. In this work, we propose a method for controllable abstractive summarization that can exploit arbitrary textual context, ranging from a short text to a collection of documents, to direct the focus of the generated summary. The proposed method incorporates a Sentence-BERT model to extract an embedding-based representation of the given context, which is then used to tag the words of the input document that are most representative of this context. In addition, we propose an unsupervised metric to evaluate the faithfulness of the topic-oriented sentences of the generated summaries with respect to the input document. Experimental results under different zero-shot setups demonstrate that the proposed method surpasses both state-of-the-art large language models (LLMs) and controllable summarization methods, producing summaries that are both reliable and relevant to the input document.
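As a rough illustration of the tagging step described above, the sketch below embeds the context with a Sentence-BERT model and marks the document words most similar to it by cosine similarity. It is a minimal sketch, not the paper's actual implementation: the model name `all-MiniLM-L6-v2`, the `<focus>` tag, the `tag_document` helper, and the top-k selection are all illustrative assumptions.

```python
# Minimal sketch of context-guided word tagging, assuming the
# sentence-transformers library. Model name, tag format, and the
# top-k selection heuristic are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def tag_document(document: str, context: str, top_k: int = 10) -> str:
    """Mark the document words closest to the context embedding."""
    words = document.split()
    # Embedding-based representation of the (arbitrarily long) context.
    ctx_emb = model.encode(context, convert_to_tensor=True)
    word_embs = model.encode(words, convert_to_tensor=True)
    # Cosine similarity between each word and the context representation.
    scores = util.cos_sim(word_embs, ctx_emb).squeeze(-1)
    top = set(scores.topk(min(top_k, len(words))).indices.tolist())
    # Wrap the most context-representative words in special tags
    # before passing the document to the summarization model.
    return " ".join(
        f"<focus> {w} </focus>" if i in top else w
        for i, w in enumerate(words)
    )
```

In this sketch the tagged document would then be fed to the abstractive summarizer, which can learn to steer its output toward the marked spans; the actual method may score and tag tokens differently.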