Why use indicators instead of some special value to discover that something is null?
Some programmers are used to indicating the null value with some special (meaning: "unlikely to ever be used") value: for example, using the smallest integer value to represent a null integer, or an empty string to represent a null string, and so on. We think this is completely wrong. Null (in the database sense) is information about the data: it describes the state of the data, and if it is null, there is no data at all. Nothing. Null. It does not make sense to talk about some special value when in fact there is no value at all, especially considering that the smallest integer value (or whatever else you choose as the "special" value) might not be that special in a given application or domain.

Thus, SOCI uses separate indicators to describe the state of exchanged data. This has the additional benefit of allowing the library to convey more than two states (null and not null). Indeed, SOCI also uses indicators to report that the data was truncated when read into a buffer that was too short to hold it.
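To illustrate, here is a minimal sketch of reading a possibly-null column with an indicator. It assumes an SQLite backend; the connection string, table, and column names (`example.db`, `persons`, `name`) are placeholders:

```cpp
#include <soci/soci.h>
#include <soci/sqlite3/soci-sqlite3.h>

#include <iostream>
#include <string>

int main()
{
    // Hypothetical database and table; adjust for your backend and schema.
    soci::session sql(soci::sqlite3, "example.db");

    std::string name;
    soci::indicator ind;

    // The indicator is filled alongside the fetched value.
    sql << "select name from persons where id = 1",
        soci::into(name, ind);

    switch (ind)
    {
    case soci::i_ok:
        std::cout << "name: " << name << '\n';
        break;
    case soci::i_null:
        std::cout << "name is null\n";        // no value at all
        break;
    case soci::i_truncated:
        std::cout << "name was truncated\n";  // buffer too short for the data
        break;
    }
}
```

Symmetrically, passing an indicator set to `soci::i_null` through `soci::use()` is how a null is written back to the database, without sacrificing any "special" value of the underlying type.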