A value in the computer may be represented in a form that favors efficient computation. We then need a mechanism to convert such representations into characters we can read.
Numbers in the computer are often represented by logical bits, each bit standing for a successive power of two.
In literal form 100 represents the bit pattern that sums the binary powers 2⁶ + 2⁵ + 2² = 100.
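The correspondence between the numeric value and its bit pattern can be checked directly in JavaScript; this is an illustrative sketch, not part of the original text:

```javascript
// The number literal 100 names a value stored as binary bits.
const n = 100;

// toString with radix 2 renders those bits as characters.
console.log(n.toString(2));               // the bits for 2⁶ + 2⁵ + 2²

// Parsing the character string back recovers the same value.
console.log(parseInt("1100100", 2) === 100);
```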
In literal form "100" represents the three characters "1", "0" and "0" composed as a string.
Many languages convert to string representations automatically on output.
console.log(100, "100")
Many languages convert to string representations when variables have been typed as strings.
var s: String; s := 100;
Many languages have functions or methods that convert numbers to strings.
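JavaScript, for example, offers such conversions both as a function and as a method; a small sketch:

```javascript
const n = 100;

// A conversion function applied to the number.
console.log(String(n));        // the decimal characters "100"

// A method carried by the number itself.
console.log(n.toString());     // same result

// The same method with an explicit radix.
console.log(n.toString(16));   // the hexadecimal characters "64"
```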
Floating point reserves a fixed number of bits for the fraction part of a real number. It remains the responsibility of the programmer to know how many of those bits represent reliable measurement.
Number-to-string conversion functions often take an argument specifying the number of trusted or desired digits to the right of the decimal point when a floating point number is converted to a string.
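JavaScript's toFixed is one such conversion; the measured value below is invented for illustration:

```javascript
// A hypothetical measurement trusted to two decimal places.
const measured = 98.58672;

// Ask for only the trusted digits.
console.log(measured.toFixed(2));   // rounded to two digits

// Asking for more digits does not add more measurement.
console.log(measured.toFixed(8));   // padded with zeros
```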
Object-oriented programming languages often expect class definitions to include a method to convert any object of that class to a string.
For objects that have a literal representation, default to-string methods often write that format.
Point.fromUser.asString = "150@230"
(150@230).class = Point