Text-to-motion generation has advanced considerably with part-based autoregressive models. However, traditional unidirectional approaches cannot access future tokens, which limits temporal coherence and degrades motion quality; they are also difficult to apply to motion editing tasks. Recently, bidirectional autoregressive models have been proposed that integrate past and future context to improve consistency. In this work, we introduce the first model to combine part-based generation with bidirectional autoregressive modeling. This approach affords detailed control over individual body parts alongside rich temporal context, and it extends naturally to motion editing. However, this combination can cause parts to rely too heavily on one another, since each part must account for expanded contextual information; such reliance can produce tangled motion sequences and allow small errors to compound in both directions along the sequence. To address these issues, we propose Partial Occlusion, a stochastic training technique that probabilistically occludes the motion information of specific parts, encouraging the model to learn representations that remain robust under partial context. We combine these contributions into BiPO. On HumanML3D, our model surpasses previous part-based methods in FID and sets a new state-of-the-art.
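The Partial Occlusion idea can be illustrated with a minimal sketch. The function name, the array layout (part-specific features stacked along the first axis), and the occlusion probability below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def partial_occlusion(part_features, occlude_prob=0.15, rng=None):
    """Stochastically occlude whole body parts during training.

    part_features: array of shape (num_parts, seq_len, dim), one slice per
    body part (layout is an assumption for illustration). Each part is
    independently dropped (zeroed) with probability `occlude_prob`, forcing
    the model to predict under partial inter-part context.
    """
    rng = np.random.default_rng() if rng is None else rng
    # keep_mask[p] is True with probability 1 - occlude_prob
    keep_mask = rng.random(part_features.shape[0]) >= occlude_prob
    # Broadcast the per-part mask over the time and feature axes
    return part_features * keep_mask[:, None, None]
```

Applied each training step with fresh randomness, this yields a different subset of visible parts per batch, analogous in spirit to dropout at the granularity of body parts.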