Sentiment analysis in legal documents presents significant challenges due to the intricate structure, domain-specific terminology, and strong contextual dependencies inherent in legal texts. In this study, a novel hybrid framework is proposed that integrates Graph Attention Networks (GATs) with domain-specific embeddings from Legal Bidirectional Encoder Representations from Transformers (LegalBERT), together with an aspect-oriented sentiment classification approach, to improve both predictive accuracy and interpretability. Unlike conventional deep learning models, the proposed method explicitly captures hierarchical relationships within legal texts through GATs while leveraging LegalBERT to enhance domain-specific semantic representation. Additionally, auxiliary features, including positional information and topic relevance, are incorporated to refine sentiment predictions. A comprehensive evaluation conducted on diverse legal datasets demonstrates that the proposed model achieves state-of-the-art performance, attaining an accuracy of 93.1% and surpassing existing benchmarks by a significant margin. Model interpretability is further enhanced through SHapley Additive exPlanations (SHAP) and the Legal Context Attribution Score (LCAS), which provide transparency into the model's decision-making process. An ablation study confirms the critical contribution of each model component, while scalability experiments validate the model's efficiency across datasets ranging from 10,000 to 200,000 sentences. Despite increased computational demands, the framework exhibits strong robustness and scalability, making it suitable for large-scale legal applications. Future research will focus on multilingual adaptation, computational optimization, and broader applications within the field of legal analytics.
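The GAT component described above can be sketched with a minimal single-head graph attention layer. This is an illustrative NumPy toy, not the authors' implementation: the dimensions, the chain-shaped sentence graph, and the function names are assumptions, with a 16-dimensional feature vector standing in for LegalBERT's 768-dimensional embeddings.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(H, adj, W, a):
    """Single-head GAT layer (Velickovic et al., 2018).

    H:   (N, F_in) node features, e.g. sentence embeddings
    adj: (N, N) binary adjacency with self-loops
    W:   (F_in, F_out) shared linear projection
    a:   (2 * F_out,) attention vector
    """
    Z = H @ W                          # project node features: (N, F_out)
    f_out = Z.shape[1]
    # e[i, j] = LeakyReLU(a^T [z_i || z_j]) decomposes into two dot products
    src = Z @ a[:f_out]                # contribution of target node i
    dst = Z @ a[f_out:]                # contribution of neighbor node j
    e = leaky_relu(src[:, None] + dst[None, :])
    e = np.where(adj > 0, e, -1e9)     # mask non-edges before the softmax
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # row-wise softmax
    return np.tanh(alpha @ Z)          # attention-weighted aggregation

rng = np.random.default_rng(0)
N, F_IN, F_OUT = 4, 16, 8              # toy sizes; LegalBERT would give F_IN = 768
H = rng.standard_normal((N, F_IN))     # stand-in for sentence embeddings
# Chain graph linking adjacent sentences in a document, plus self-loops
adj = np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
W = rng.standard_normal((F_IN, F_OUT)) * 0.1
a = rng.standard_normal(2 * F_OUT)
H_out = gat_layer(H, adj, W, a)
print(H_out.shape)  # (4, 8)
```

In the framework described above, such attention-weighted aggregation is what lets each sentence representation be refined by its structurally related neighbors before aspect-level sentiment classification.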