Spurred by the growing interest in post-Shannon communication, it has recently been shown that leveraging semantics can significantly improve communication effectiveness across many tasks. In this article, inspired by human communication, we propose a novel stochastic model of System 1 semantics-native communication (SNC) for generic tasks, where a speaker, intending to refer to an entity, extracts its semantics and communicates their symbolic representation to a target listener. To reach its full potential, we additionally infuse contextual reasoning into SNC, such that the speaker locally and iteratively self-communicates with a virtual agent built on the physical listener's unique way of coding semantics, i.e., its communication context. The resultant System 2 SNC allows the speaker to extract the semantics that are most effective for its listener. Leveraging the proposed stochastic model, we show that the reliability of System 2 SNC increases with the number of meaningful concepts, and we derive the expected semantic representation (SR) bit length, which quantifies the extracted effective semantics. We also show that System 2 SNC significantly reduces the SR length without compromising communication reliability.
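For illustration only, the following Python sketch captures the System 2 SNC idea described above: the speaker iteratively self-communicates with a virtual agent that models the listener's semantic coding (its communication context) and keeps the shortest semantic representation (SR) that the virtual agent still decodes to the intended entity. The function and variable names (e.g., `system2_snc`, `virtual_listener`) and the stopping rule are assumptions for exposition, not the paper's exact formulation.

```python
# Conceptual sketch of System 2 SNC (hypothetical names; not the paper's exact algorithm).
# The speaker self-communicates with a virtual agent mimicking the listener's
# communication context, and selects the shortest SR the agent still maps
# back to the intended entity.

from typing import Callable, Iterable, Optional


def system2_snc(
    intended_entity: str,
    candidate_srs: Iterable[str],            # candidate symbolic representations
    virtual_listener: Callable[[str], str],  # local model of the listener's decoding context
) -> Optional[str]:
    """Return the shortest SR that the virtual listener decodes to the intended entity."""
    for sr in sorted(candidate_srs, key=len):
        # Local, iterative self-communication: simulate the listener's interpretation.
        if virtual_listener(sr) == intended_entity:
            return sr  # effective semantics found: shorter SR, same reliability
    return None  # no candidate is unambiguous under this listener context


if __name__ == "__main__":
    # Toy listener context mapping SRs to the entities this listener can distinguish.
    context = {"red": "apple", "round red fruit": "apple", "yellow": "banana"}

    def listener(sr: str) -> str:
        return context.get(sr, "unknown")

    print(system2_snc("apple", ["round red fruit", "red"], listener))  # -> "red"
```

Under these assumptions, the sketch mirrors the claimed benefit of System 2 SNC: by reasoning over the listener's context before transmitting, the speaker can shorten the SR without sacrificing the listener's ability to identify the intended entity.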