Generative models for tabular data face a long-standing challenge: effectively modelling heterogeneous feature interrelationships, especially when generating tabular data with both continuous and categorical input features. Capturing these interrelationships is crucial because it allows models to learn the complex patterns and dependencies present in the underlying data. A promising way to address this challenge is to devise suitable encoding/embedding schemes for the input features before generative modelling. However, prior methods often rely either on suboptimal heuristics, such as one-hot encoding of discrete features or separate modelling of discrete and continuous features, or on latent-space generative models. Instead, our proposed solution leverages efficient continuous encodings to unify the data space and applies a single generative process jointly across all encodings, thereby efficiently capturing heterogeneous feature interrelationships. Specifically, it employs encoding schemes such as Analog Bits or Dictionary Encoding that effectively convert discrete features into continuous ones. Extensive experiments on real-world and synthetic tabular datasets comprising heterogeneous features demonstrate that our encoding schemes, combined with Flow Matching as the generative model, significantly enhance model capabilities. Our models, TabUnite-i2bFlow and TabUnite-dicFlow, address data heterogeneity and achieve superior performance across a broad suite of datasets, baselines, and benchmarks while generating accurate, robust, and diverse tabular data.
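To make the Analog Bits idea concrete, the following is a minimal illustrative sketch (not the paper's actual implementation) of how a discrete category index can be mapped to a continuous vector and recovered afterwards: each index is written in binary and the bits are shifted to values in {-1, +1}, so a generative model can treat them as ordinary continuous features; the function names and the `scale` parameter are assumptions for illustration.

```python
import numpy as np

def analog_bits_encode(indices, num_bits, scale=1.0):
    """Illustrative Analog Bits encoding: integer category indices -> +/-scale bits.

    Bits are extracted least-significant first, then mapped {0, 1} -> {-scale, +scale}
    so that every feature lives in a continuous space.
    """
    indices = np.asarray(indices)
    bits = (indices[..., None] >> np.arange(num_bits)) & 1  # binary digits, LSB first
    return (2.0 * bits - 1.0) * scale

def analog_bits_decode(x):
    """Invert the encoding: threshold at zero, then reassemble the integer index."""
    bits = (np.asarray(x) > 0).astype(int)
    return (bits * (1 << np.arange(bits.shape[-1]))).sum(axis=-1)
```

A categorical feature with K levels needs only ceil(log2(K)) continuous dimensions under this scheme, versus K dimensions for one-hot encoding, and decoding is a simple thresholding step applied to the model's continuous samples.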