Beyond Single Concept Vector: Modeling Concept Subspace in LLMs with Gaussian Distribution

New Jersey Institute of Technology, Wake Forest University, Cisco Research

Abstract

Probing learned concepts in large language models (LLMs) is crucial for understanding how semantic knowledge is encoded internally. Training linear classifiers on probing tasks is a principled approach for identifying the vector that denotes a certain concept in the representation space. However, the single vector identified for a concept varies with both the data and the training process, making it less robust and weakening its effectiveness in real-world applications. To address this challenge, we propose an approach to approximate the subspace representing a specific concept. Built on linear probing classifiers, we extend concept vectors into a Gaussian Concept Subspace (GCS). We demonstrate GCS's effectiveness by measuring its faithfulness and plausibility across multiple LLMs of different sizes and architectures. Additionally, we use representation intervention tasks to showcase its efficacy in real-world applications such as emotion steering. Experimental results indicate that GCS concept vectors have the potential to balance steering performance with maintaining fluency in natural language generation tasks.

GCS Framework
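
The abstract describes extending probe-derived concept vectors into a Gaussian Concept Subspace. The snippet below is a minimal sketch of that idea, assuming the observed vectors come from logistic-regression probes trained on resampled splits of a concept's probing data and that the Gaussian is estimated dimension-wise (diagonal covariance); the exact estimator, and the `hidden_states`/`labels` variables, are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def observed_concept_vectors(hidden_states, labels, n_probes=100, seed=0):
    """Train linear probes on resampled splits of the probing data and return
    their unit-normalized weight vectors as observed concept vectors."""
    rng = np.random.default_rng(seed)
    vectors = []
    for _ in range(n_probes):
        idx = rng.choice(len(labels), size=len(labels), replace=True)   # bootstrap resample
        clf = LogisticRegression(max_iter=1000).fit(hidden_states[idx], labels[idx])
        w = clf.coef_[0]
        vectors.append(w / np.linalg.norm(w))
    return np.stack(vectors)                       # shape: (n_probes, hidden_dim)

def fit_gcs(observed):
    """Estimate a dimension-wise Gaussian (mean, std) over the observed vectors."""
    return observed.mean(axis=0), observed.std(axis=0)

def sample_concept_vectors(mu, sigma, n_samples=100, k=1.0, seed=0):
    """Sample concept vectors from the Gaussian; scaling the spread by k loosely
    mirrors the 'k sigma' settings in Table 1 (one possible reading)."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(mu, k * sigma, size=(n_samples, mu.shape[0]))
    return samples / np.linalg.norm(samples, axis=1, keepdims=True)
```

In this reading, the Gaussian mean plays the role of a single probe direction, while sampling at larger k explores a wider region of the concept subspace.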

Evaluation

RQ1: How faithfully do GCS-sampled concept vectors represent the original concepts? (Faithfulness)

  • Experiment 1
Figure: Histogram of cosine similarity within observed concept vectors, sampled concept vectors, and between both sets for concept "Bird" (panels: Llama-2-7B layer 30, Gemma-7B layer 26, Llama-2-13B layer 38).

Figure: Layer-wise average cosine similarity, from the second layer to the penultimate layer, of observed concept vectors, sampled concept vectors, and between both sets for concept "Bird" (panels: Llama-2-7B, Gemma-7B, Llama-2-13B).

  • Experiment 2
Figure: Accuracy of observed and sampled concept vectors across varying models (Llama-2-7B, Gemma-7B, Llama-2-13B).

  • Conclusion
    • Observed concept vectors are similar to sampled concept vectors in representation space.
    • GCS-sampled concept vectors are more general and robust when classifying concept-related data (a minimal sketch of both faithfulness measurements follows this list).
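
As a concrete companion to the two experiments above, the sketch below computes average cosine similarity within each vector set and between the two sets (Experiment 1), and scores each sampled vector as a simple linear classifier on held-out probing data (Experiment 2). The projection-and-threshold decision rule is an illustrative assumption, not necessarily the paper's evaluation protocol.

```python
import numpy as np

def avg_cosine_similarity(A, B):
    """Average pairwise cosine similarity between rows of A and rows of B
    (rows are assumed unit-normalized; self-pairs are included when A is B)."""
    return float((A @ B.T).mean())

def vector_accuracy(vec, hidden_states, labels):
    """Use a single concept vector as a classifier: project activations onto the
    vector and threshold at the midpoint between the two class means."""
    scores = hidden_states @ vec
    threshold = 0.5 * (scores[labels == 1].mean() + scores[labels == 0].mean())
    return float(((scores > threshold).astype(int) == labels).mean())

# Illustrative usage (observed, sampled: (n, d) arrays; test_h: (m, d); test_y: (m,)):
# within_observed = avg_cosine_similarity(observed, observed)
# within_sampled  = avg_cosine_similarity(sampled, sampled)
# between_sets    = avg_cosine_similarity(observed, sampled)
# sampled_acc     = np.mean([vector_accuracy(v, test_h, test_y) for v in sampled])
```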

RQ2: To what extent do explanations derived from GCS-sampled concept vectors align with human expectations about the hierarchies within the model's learned knowledge? (Plausibility)

  • Experiment 1
Figure: Heatmap of average cosine similarity among 16 concepts across Llama-2-7B, Gemma-7B, and Llama-2-13B. The 16 low-level concepts are grouped into four high-level categories: the first 4 rows/columns represent sports events, the next 4 represent populated places, followed by 4 for animals, and the last 4 for movie genres.

  • Experiment 2
Figure: PCA visualization of 16 concepts across Llama-2-7B, Gemma-7B, and Llama-2-13B. Low-level concepts belonging to the same high-level concept category share the same color.

  • Conclusion
    • Low-level concepts within the same high-level category typically exhibit higher similarity scores.
    • Some high-level categories, such as "Sports Event", show very high internal similarity across all models, suggesting these concepts are closely related in the models' representations.
    • Some high-level categories, such as "Populated Place" and "Animal", show stronger correlation with each other, aligning with human intuition about real-world relationships between these concepts (a sketch of both plausibility views follows this list).
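
A hedged sketch of how the two plausibility views above could be produced: average cosine similarity between the sampled vector sets of every pair of the 16 concepts (the heatmap), and a 2-D PCA projection of all sampled vectors colored by high-level category (the scatter plot). The dictionary layout and helper names are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def concept_similarity_matrix(concept_vectors):
    """concept_vectors: dict mapping concept name -> (n, d) array of unit-normalized
    sampled vectors. Returns the names and a matrix of average pairwise cosine similarity."""
    names = list(concept_vectors)
    sim = np.zeros((len(names), len(names)))
    for i, a in enumerate(names):
        for j, b in enumerate(names):
            sim[i, j] = (concept_vectors[a] @ concept_vectors[b].T).mean()
    return names, sim

def pca_projection(concept_vectors, n_components=2):
    """Project all sampled concept vectors to 2-D; points from the same concept can then
    be colored by their high-level category in the visualization."""
    names = list(concept_vectors)
    X = np.vstack([concept_vectors[c] for c in names])
    point_labels = np.concatenate([[c] * len(concept_vectors[c]) for c in names])
    return PCA(n_components=n_components).fit_transform(X), point_labels
```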

RQ3: Can the proposed GCS method effectively mitigate unwanted behaviors in LLMs? (Effectiveness on emotion steering)

Figure: An illustration of inference-time intervention with an LLM.

Figure: Comparison of generated texts with and without intervention.
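
The intervention illustrated above can be approximated by adding a scaled concept vector to one layer's hidden states during generation. Below is a minimal sketch using a PyTorch forward hook on a Hugging Face causal LM; the model name, layer index, strength value, and the random placeholder steering vector are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"                   # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def make_steering_hook(steering_vector, strength):
    """Add strength * steering_vector to the hidden states emitted by a decoder layer."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + strength * steering_vector.to(hidden.dtype).to(hidden.device)
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return hook

# Placeholder: in practice this would be a GCS-sampled vector for the target emotion (e.g., joy).
steering_vector = torch.randn(model.config.hidden_size)
layer_idx, strength = 30, 0.048                            # illustrative layer and strength
handle = model.model.layers[layer_idx].register_forward_hook(
    make_steering_hook(steering_vector, strength))

inputs = tokenizer("Thinking about the week ahead, I feel", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unmodified model
```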

Table 1: Evaluation of generated text after adding steering vectors.
Steering Method     Metric               Steering strength
                                         0.038   0.043   0.048   0.053   0.059   0.064   0.069   0.074   0.080
Mean Difference     Joyfulness (Avg) ↑   1.020   0.860   1.220   1.100   1.653   1.245   1.800   1.780   1.260
                    Coherence (Avg) ↓    3.857   5.460   5.740   5.420   5.571   6.306   5.440   4.420   3.420
1 sigma             Joyfulness (Avg) ↑   1.000   0.800   1.260   1.490   2.120   2.980   2.280   1.776   2.160
                    Coherence (Avg) ↓    4.780   3.680   4.040   3.531   4.860   4.857   6.480   5.347   5.800
2 sigma             Joyfulness (Avg) ↑   0.840   1.520   1.143   1.878   2.625   2.458   2.520   2.340   1.960
                    Coherence (Avg) ↓    4.420   3.680   3.857   4.061   5.854   6.688   6.460   6.360   6.520
3 sigma             Joyfulness (Avg) ↑   0.820   1.220   1.220   1.837   2.460   2.571   2.388   2.510   1.854
                    Coherence (Avg) ↓    3.920   4.180   3.580   3.449   5.120   5.633   6.204   6.633   5.542
4 sigma             Joyfulness (Avg) ↑   0.840   0.755   1.380   1.960   2.280   2.224   2.612   2.720   1.520
                    Coherence (Avg) ↓    3.600   3.837   4.400   3.560   4.720   5.878   6.653   6.800   4.200
5 sigma             Joyfulness (Avg) ↑   0.580   0.640   1.429   1.640   2.458   1.680   2.333   2.224   2.220
                    Coherence (Avg) ↓    3.440   4.040   5.347   3.480   4.771   5.900   5.625   5.694   4.980
One linear          Joyfulness (Avg) ↑   0.840   1.245   1.520   2.000   2.292   2.580   3.020   2.480   2.080
                    Coherence (Avg) ↓    4.360   3.347   4.380   4.640   5.208   6.020   5.918   6.780   6.340

Note: A higher joyfulness score indicates a better steering effect. Coherence measures the repetitiveness and chaos in the generated sentences, with lower values being preferable. The best performance for each method is highlighted in bold within the table.

  • Conclusion
    • GCS-sampled concept vectors better balance steering performance with maintaining fluency in natural language generation tasks.

Citation