API Reference
JSON.JSONText
— Type
JSON.JSONText
Wrapper around a string containing JSON data. Can be used to insert raw JSON in JSON output, like:
json(JSONText("{\"key\": \"value\"}"))
This will output the JSON as-is, without escaping. Note that no check is done to ensure that the JSON is valid.
Can also be used to read "raw JSON" when parsing, meaning no specialized structure (JSON.Object, Vector{Any}, etc.) is created. Example:
x = JSON.parse("[1,2,3]", JSONText)
# x.value == "[1,2,3]"
JSON.LazyValue
— Type
JSON.LazyValue
A lazy representation of a JSON value. The LazyValue type supports the "selection" syntax for lazily navigating the JSON value. Lazy values can be materialized via JSON.parse(x), JSON.parse(x, T), or JSON.parse!(x, y).
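For instance, a brief sketch of lazy navigation followed by materialization (the JSON literal here is only illustrative):
x = JSON.lazy("""{"a": [1, 2, 3]}""")
v = x.a            # selection returns another LazyValue; nothing is fully parsed yet
JSON.parse(v)      # materializes the selected value into a Vector{Any}: [1, 2, 3]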
JSON.isvalidjson
— Function
JSON.isvalidjson(json) -> Bool
Check if the given JSON is valid. This function will return true if the JSON is valid, and false otherwise. Inputs can be a string, a vector of bytes, or an IO stream, the same inputs as supported for JSON.lazy and JSON.parse.
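For example (the input literals are illustrative):
JSON.isvalidjson("{\"a\": 1}")   # true
JSON.isvalidjson("{\"a\": 1")    # false (unterminated object)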
JSON.json
— Function
JSON.json(x) -> String
JSON.json(io, x)
JSON.json(file_name, x)
Serialize x to JSON format. The 1st method takes just the object and returns a String. In the 2nd method, io is an IO object, and the JSON output will be written to it. For the 3rd method, file_name is a String; a file will be opened and the JSON output will be written to it.
All methods accept the following keyword arguments:
- omit_null::Union{Bool, Nothing}=nothing: Controls whether struct fields that are undefined or are nothing are included in the JSON output. If true, only non-null fields are written. If false, all fields are included regardless of being undefined or nothing. If nothing, the behavior is determined by JSON.omit_null(::Type{T}), which is false by default.
- omit_empty::Union{Bool, Nothing}=nothing: Controls whether struct fields that are empty are included in the JSON output. If true, empty fields are excluded. If false, empty fields are included. If nothing, the behavior is determined by JSON.omit_empty(::Type{T}).
- allownan::Bool=false: If true, allow Inf, -Inf, and NaN in the output. If false, throw an error if Inf, -Inf, or NaN is encountered.
- jsonlines::Bool=false: If true, input must be array-like and the output will be written in the JSON Lines format, where each element of the array is written on a separate line (i.e. separated by a single newline character \n). If false, the output will be written in the standard JSON format.
- pretty::Union{Integer,Bool}=false: Controls pretty printing of the JSON output. If true, the output will be pretty-printed with 2 spaces of indentation. If an integer, it will be used as the number of spaces of indentation. If false or 0, the output will be compact. Note: pretty printing is not supported when jsonlines=true.
- inline_limit::Int=0: For arrays shorter than this limit, pretty printing will be disabled (indentation set to 0).
- ninf::String="-Infinity": Custom string representation for negative infinity.
- inf::String="Infinity": Custom string representation for positive infinity.
- nan::String="NaN": Custom string representation for NaN.
- float_style::Symbol=:shortest: Controls how floating-point numbers are formatted. Options are:
  - :shortest: Use the shortest representation that preserves the value
  - :fixed: Use fixed-point notation
  - :exp: Use exponential notation
- float_precision::Int=1: Number of decimal places to use when float_style is :fixed or :exp.
- bufsize::Int=2^22: Buffer size in bytes for IO operations. When writing to IO, the buffer will be flushed to the IO stream once it reaches this size. This helps control memory usage during large write operations. Default is 4MB (2^22 bytes). This parameter is ignored when returning a String.
- style::JSONStyle=JSONWriteStyle(): Custom style object that controls serialization behavior. This allows customizing certain aspects of serialization, like defining a custom lower method for a non-owned type. For example, define struct MyStyle <: JSONStyle end and JSON.lower(x::Rational) = (num=x.num, den=x.den); then calling JSON.json(1//3; style=MyStyle()) will output {"num": 1, "den": 3}.
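As a brief sketch of a few of these keywords in action (the Point type below is hypothetical, not part of the package):
struct Point
    x::Int
    y::Union{Nothing, Int}
end

JSON.json(Point(1, nothing))                  # "{\"x\":1,\"y\":null}" (nulls included by default)
JSON.json(Point(1, nothing); omit_null=true)  # "{\"x\":1}"
JSON.json(Point(1, 2); pretty=true)           # pretty-printed with 2 spaces of indentation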
By default, x must be a JSON-serializable object. Supported types include:
- AbstractString => JSON string: types must support the AbstractString interface, specifically with support for ncodeunits and codeunit(x, i).
- Bool => JSON boolean: must be true or false
- Nothing => JSON null: must be the nothing singleton value
- Number => JSON number: Integer subtypes or Union{Float16, Float32, Float64} have default implementations; for other Number types, JSON.tostring is first called to convert the value to a String before being written directly to the JSON output
- AbstractArray/Tuple/AbstractSet => JSON array: objects for which JSON.arraylike returns true are output as JSON arrays. arraylike is defined by default for AbstractArray, AbstractSet, Tuple, and Base.Generator. Other types that define it must also properly implement StructUtils.applyeach to iterate over the index => element pairs. Note that arrays with dimensionality > 1 are written as nested arrays, with N nestings for N dimensions, and the 1st dimension is always the innermost nested JSON array (column-major order).
- AbstractDict/NamedTuple/structs => JSON object: if a value doesn't fall into any of the above categories, it is output as a JSON object. StructUtils.applyeach is called, which has appropriate implementations for AbstractDict, NamedTuple, and structs, where field name => value pairs are iterated over. Field names can be output with an alternative name via a field tag overload, like field::Type &(json=(name="alternative_name",),)
If an object is not JSON-serializable, an override for JSON.lower can be defined to convert it to a JSON-serializable object. Some default lower definitions are defined in JSON itself, for example:
StructUtils.lower(::Missing) = nothing
StructUtils.lower(x::Symbol) = String(x)
StructUtils.lower(x::Union{Enum, AbstractChar, VersionNumber, Cstring, Cwstring, UUID, Dates.TimeType}) = string(x)
StructUtils.lower(x::Regex) = x.pattern
These allow common Base/stdlib types to be serialized in an expected format.
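A minimal sketch of such an override for an owned type (Celsius is a hypothetical type, not part of the package):
struct Celsius
    degrees::Float64
end

JSON.lower(x::Celsius) = x.degrees   # serialize a Celsius as a plain JSON number

JSON.json(Celsius(21.5))             # "21.5"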
Circular references are tracked automatically and cycles are broken by writing null for any child references.
For pre-formatted JSON data as a String, use JSONText(json) to write the string out as-is.
For AbstractDict objects with non-string keys, StructUtils.lowerkey will be called before serializing. This allows aggregate or other types of dict keys to be converted to an appropriate string representation. See StructUtils.liftkey for the reverse operation, which is called when parsing JSON data back into a dict type.
NOTE: JSON.json should not be overloaded directly by custom types, as this isn't robust for various output options (IO, String, etc.) nor recursive situations. Types should define an appropriate JSON.lower definition instead.
NOTE: JSON.json(str, indent::Integer) is special-cased for backwards compatibility with pre-1.0 JSON.jl, as this typically would mean "write out the indent integer to file str". As writing out a single integer to a file is extremely rare, it was decided to keep the pre-1.0 behavior for compatibility reasons.
Examples:
using Dates
abstract type AbstractMonster end
struct Dracula <: AbstractMonster
num_victims::Int
end
struct Werewolf <: AbstractMonster
witching_hour::DateTime
end
struct Percent <: Number
value::Float64
end
JSON.lower(x::Percent) = x.value
StructUtils.lowerkey(x::Percent) = string(x.value)
@noarg mutable struct FrankenStruct
id::Int
name::String # no default to show serialization of an undefined field
address::Union{Nothing, String} = nothing
rate::Union{Missing, Float64} = missing
type::Symbol = :a &(json=(name="franken_type",),)
notsure::Any = JSON.Object("key" => "value")
monster::AbstractMonster = Dracula(10) &(json=(lower=x -> x isa Dracula ? (monster_type="vampire", num_victims=x.num_victims) : (monster_type="werewolf", witching_hour=x.witching_hour),),)
percent::Percent = Percent(0.5)
birthdate::Date = Date(2025, 1, 1) &(json=(dateformat="yyyy/mm/dd",),)
percentages::Dict{Percent, Int} = Dict{Percent, Int}(Percent(0.0) => 0, Percent(1.0) => 1)
json_properties::JSONText = JSONText("{\"key\": \"value\"}")
matrix::Matrix{Float64} = [1.0 2.0; 3.0 4.0]
extra_field::Any = nothing &(json=(ignore=true,),)
end
franken = FrankenStruct()
franken.id = 1
json = JSON.json(franken; omit_null=false)
# "{"id":1,"name":null,"address":null,"rate":null,"franken_type":"a","notsure":{"key":"value"},"monster":{"monster_type":"vampire","num_victims":10},"percent":0.5,"birthdate":"2025/01/01","percentages":{"1.0":1,"0.0":0},"json_properties":{"key": "value"},"matrix":[[1.0,3.0],[2.0,4.0]]}"
A few comments on the JSON produced in the example above:
- The name field was #undef, and thus was serialized as null.
- The address and rate fields were nothing and missing, respectively, and thus were serialized as null.
- The type field has a name field tag, so the JSON key for this field is franken_type instead of type.
- The notsure field is a JSON.Object, so it is serialized as a JSON object.
- The monster field is an AbstractMonster, which is a custom type. It has a lower field tag that specifies how the value of this field specifically (not all AbstractMonster values) should be serialized.
- The percent field is a Percent, which is a custom type. It has a lower method that specifies how Percent values should be serialized.
- The birthdate field has a dateformat field tag, so the value follows the format (yyyy/mm/dd) instead of the default ISO date format (yyyy-mm-dd).
- The percentages field is a Dict{Percent, Int} with a custom key type. Percent has a lowerkey method that specifies how Percent keys should be serialized as strings.
- The json_properties field is a JSONText, so the JSONText value is serialized as-is.
- The matrix field is a Matrix{Float64}, which is serialized as a JSON array, with the first dimension being the innermost nested JSON array (column-major order).
- The extra_field field has an ignore field tag, so it is skipped when serializing.
JSON.lazy
— Function
JSON.lazy(json; kw...)
JSON.lazyfile(file; kw...)
Detect the initial JSON value in json, returning a JSON.LazyValue instance. The json input can be:
- AbstractString
- AbstractVector{UInt8}
- IO, IOStream, Cmd (bytes are fully read into a Vector{UInt8} for parsing, i.e. read(json) is called)
lazyfile is a convenience method that takes a filename and opens the file before calling lazy.
The JSON.LazyValue supports the "selection" syntax for lazily navigating the JSON value. For example (x = JSON.lazy(json)):
- x.key, x[:key], or x["key"] for JSON objects
- x[1], x[2:3], x[end] for JSON arrays
- propertynames(x) to see all keys in the JSON object
- x.a.b.c for selecting deeply nested values
- x[~, (k, v) -> k == "foo"] for recursively searching for key "foo" and returning matching values
NOTE: Selecting values from a LazyValue will always return a LazyValue. Selecting a specific key of an object or index of an array will only parse what is necessary before returning. This leads to a few conclusions about how to effectively utilize LazyValue:
- JSON.lazy is great for one-time access of a value in JSON
- It's also great for finding a required deeply nested value
- It's not great for any case where repeated access to values is required; this results in the same JSON being parsed on each access (i.e. naively iterating a lazy JSON array will be O(n^2))
- Best practice is to use JSON.lazy sparingly unless there's a specific case where it will benefit, or use JSON.lazy as a means to access a value that is then fully materialized
Another option for processing a JSON.LazyValue is calling foreach(f, x), which is defined on JSON.LazyValue for JSON objects and arrays. For objects, f should be of the form f(kv::Pair{String, LazyValue}) where kv is a key-value pair, and for arrays, f(v::LazyValue) where v is the value at the index. This allows iterating over all key-value pairs in an object or all values in an array without materializing the entire structure.
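A short sketch of foreach over a lazy object (the JSON literal is illustrative):
x = JSON.lazy("""{"a": 1, "b": 2}""")
foreach(x) do (k, v)
    # k is a String key, v is a LazyValue; materialize v only when needed
    println(k, " => ", JSON.parse(v))
end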
Lazy values can be materialized via JSON.parse in a few different forms:
- JSON.parse(json): Default materialization into JSON.Object (a Dict-like type), Vector{Any}, etc.
- JSON.parse(json, T): Materialize into a user-provided type T (following rules/programmatic construction from StructUtils.jl)
- JSON.parse!(json, x): Materialize into an existing object x (following rules/programmatic construction from StructUtils.jl)
Thus, for completeness' sake, here's an example of ideal usage of JSON.lazy:
x = JSON.lazy(very_large_json_object)
# find a deeply nested value
y = x.a.b.c.d.e.f.g
# materialize the value
z = JSON.parse(y)
# now mutate/repeatedly access values in z
In this example, we only parsed as much of the very_large_json_object as was required to find the value y. Then we fully materialized y into z, which is now a normal Julia object. We can now mutate or access values in z.
Currently supported keyword arguments include:
- allownan::Bool = false: whether "special" float values should be allowed while parsing (NaN, Inf, -Inf); these values are specifically not allowed in the JSON spec, but many JSON libraries allow reading/writing them
- ninf::String = "-Infinity": the string that will be used to parse -Inf if allownan=true
- inf::String = "Infinity": the string that will be used to parse Inf if allownan=true
- nan::String = "NaN": the string that will be used to parse NaN if allownan=true
- jsonlines::Bool = false: whether the JSON input should be treated as an implicit array, with newlines separating individual JSON elements and no leading '[' or trailing ']' characters. Common in logging or streaming workflows. Defaults to true when used with JSON.parsefile and the filename extension is .jsonl or .ndjson. Note this ensures that parsing will always return an array at the root level.
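A minimal sketch of the jsonlines behavior (the input string is illustrative):
rows = JSON.parse("""{"a": 1}\n{"a": 2}"""; jsonlines=true)
# 2-element Vector{Any}, one JSON.Object per input line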
Note that validation is only fully done on null, true, and false, while other values are only lazily inferred from the first non-whitespace character:
- '{': JSON object
- '[': JSON array
- '"': JSON string
- '0'-'9' or '-': JSON number

Further validation for these values is done later when materialized, like via JSON.parse, or via selection syntax calls on a LazyValue.
JSON.lazyfile
— Function
JSON.lazy(json; kw...)
JSON.lazyfile(file; kw...)

lazyfile is a convenience method that takes a filename and opens the file before calling lazy. It shares its docstring with JSON.lazy; see that entry above for the full description of accepted inputs, keyword arguments, the lazy selection syntax, and materialization via JSON.parse.
JSON.omit_empty
— Method
JSON.omit_empty(::Type{T})::Bool
JSON.omit_empty(::JSONStyle, ::Type{T})::Bool
Controls whether struct fields that are empty are included in the JSON output. Returns false by default, meaning empty fields are included. To instead exclude empty fields, set this to true. A field is considered empty if it is nothing, an empty collection (empty array, dict, string, tuple, or named tuple), or missing. This can also be controlled via the omit_empty keyword argument in JSON.json.
# Override for a specific type
JSON.omit_empty(::Type{MyStruct}) = true
# Override for a custom style
struct MyStyle <: JSON.JSONStyle end
JSON.omit_empty(::MyStyle, ::Type{T}) where {T} = true
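A brief sketch of the effect (the Tags type is hypothetical):
struct Tags
    name::String
    tags::Vector{String}
end

JSON.omit_empty(::Type{Tags}) = true

JSON.json(Tags("a", String[]))   # "{\"name\":\"a\"}"  (the empty tags vector is omitted)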
JSON.omit_null
— Method
JSON.omit_null(::Type{T})::Bool
JSON.omit_null(::JSONStyle, ::Type{T})::Bool
Controls whether struct fields that are undefined or are nothing are included in the JSON output. Returns false by default, meaning all fields are included, regardless of being undefined or nothing. To instead ensure only non-null fields are written, set this to true. This can also be controlled via the omit_null keyword argument in JSON.json.
# Override for a specific type
JSON.omit_null(::Type{MyStruct}) = true
# Override for a custom style
struct MyStyle <: JSON.JSONStyle end
JSON.omit_null(::MyStyle, ::Type{T}) where {T} = true
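A brief sketch of the effect (the Profile type is hypothetical):
struct Profile
    id::Int
    email::Union{Nothing, String}
end

JSON.omit_null(::Type{Profile}) = true

JSON.json(Profile(1, nothing))   # "{\"id\":1}"  (the nothing field is omitted)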
JSON.parse
— Function
JSON.parse(json)
JSON.parse(json, T)
JSON.parse!(json, x)
JSON.parsefile(filename)
JSON.parsefile(filename, T)
JSON.parsefile!(filename, x)
Parse a JSON input (string, vector, stream, LazyValue, etc.) into a Julia value. The parsefile variants take a filename, open the file, and pass the IOStream to parse.
Currently supported keyword arguments include:
- allownan: allows parsing NaN, Inf, and -Inf, since they are otherwise invalid JSON
- ninf: string to use for -Inf (default: "-Infinity")
- inf: string to use for Inf (default: "Infinity")
- nan: string to use for NaN (default: "NaN")
- jsonlines: treat the json input as an implicit JSON array, delimited by newlines, each element being parsed from each row/line in the input
- dicttype: a custom AbstractDict type to use instead of JSON.Object{String, Any} as the default type for JSON object materialization
- null: a custom value to use for JSON null values (default: nothing)
- style: a custom StructUtils.StructStyle subtype instance to be used in calls to StructUtils.make and StructUtils.lift. This allows overriding default behaviors for non-owned types.
The methods without a type specified (JSON.parse(json), JSON.parsefile(filename)) do a generic materialization into predefined default types, including:
- JSON object => JSON.Object{String, Any} (see note below)
- JSON array => Vector{Any}
- JSON string => String
- JSON number => Int64, BigInt, Float64, or BigFloat
- JSON true => true
- JSON false => false
- JSON null => nothing
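For instance, a brief sketch of the default materialization (the JSON literal is illustrative):
obj = JSON.parse("""{"a": 1, "b": [1.0, 2.0], "c": null}""")
obj["a"]   # 1
obj["b"]   # Any[1.0, 2.0]
obj["c"]   # nothing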
When a type T is specified (JSON.parse(json, T), JSON.parsefile(filename, T)), materialization to a value of type T will be attempted utilizing machinery and interfaces provided by the StructUtils.jl package, including:
- For JSON objects, JSON keys will be matched against field names of T, with a value being constructed via T(args...)
- If T was defined with the @noarg macro, an empty instance will be constructed, and field values set as JSON keys match field names
- If T had default field values defined using the @defaults or @kwarg macros (from the StructUtils.jl package), those will be set in the value of T unless different values are parsed from the JSON
- If T was defined with the @nonstruct macro, the struct will be treated as a primitive type and constructed using the lift function rather than from field values
- JSON keys that don't match field names in T will be ignored (skipped over)
- If a field in T has a name fieldtag, the name value will be used to match JSON keys instead
- If T or any recursive field type of T is abstract, an appropriate JSON.@choosetype T x -> ... definition should exist for "choosing" a concrete type at runtime; default type choosing exists for Union{T, Missing} and Union{T, Nothing} where the JSON value is checked if null. If the Any type is encountered, the default materialization types will be used (JSON.Object, Vector{Any}, etc.)
- For any non-JSON-standard non-aggregate (i.e. non-object, non-array) field type of T, a JSON.lift(::Type{T}, x) = ... definition can be defined for how to "lift" the default JSON value (String, Number, Bool, nothing) to the type T; a default lift definition exists, for example, for JSON.lift(::Type{Missing}, x) = missing, where the standard JSON value for null is nothing and it can be "lifted" to missing
- For any T or recursive field type of T that is AbstractDict, non-string/symbol/integer keys will need to have a StructUtils.liftkey(::Type{T}, x) definition for how to "lift" the JSON string key to the key type of T
For any T or recursive field type of T that is JSON.JSONText, the next full raw JSON value will be preserved in the JSONText wrapper as-is.
For the unique case of nested JSON arrays and prior knowledge of the expected dimensionality, a target type T can be given as an AbstractArray{T, N} subtype. In this case, the JSON array data is materialized as an n-dimensional array, where: the number of JSON array nestings must match the Julia array dimensionality (N), nested JSON arrays at matching depths are assumed to have equal lengths, and the length of the innermost JSON array is the 1st dimension length, and so on. For example, the JSON array [[[1.0,2.0]]] would be materialized as a 3-dimensional array of Float64 with size (2, 1, 1) when called like JSON.parse("[[[1.0,2.0]]]", Array{Float64, 3}). Note that n-dimensional Julia arrays are written to JSON as nested JSON arrays by default, to enable lossless re-parsing, though the dimensionality must still be provided explicitly in the call to parse (i.e. default parsing via JSON.parse(json) will result in plain nested Vector{Any}s being returned).
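A short sketch of the column-major convention described above:
A = JSON.parse("[[1.0, 2.0], [3.0, 4.0]]", Matrix{Float64})
# 2×2 Matrix{Float64}:
#  1.0  3.0
#  2.0  4.0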
Examples:
using Dates
abstract type AbstractMonster end
struct Dracula <: AbstractMonster
num_victims::Int
end
struct Werewolf <: AbstractMonster
witching_hour::DateTime
end
JSON.@choosetype AbstractMonster x -> x.monster_type[] == "vampire" ? Dracula : Werewolf
struct Percent <: Number
value::Float64
end
JSON.lift(::Type{Percent}, x) = Percent(Float64(x))
StructUtils.liftkey(::Type{Percent}, x::String) = Percent(parse(Float64, x))
@defaults struct FrankenStruct
id::Int = 0
name::String = "Jim"
address::Union{Nothing, String} = nothing
rate::Union{Missing, Float64} = missing
type::Symbol = :a &(json=(name="franken_type",),)
notsure::Any = nothing
monster::AbstractMonster = Dracula(0)
percent::Percent = Percent(0.0)
birthdate::Date = Date(0) &(json=(dateformat="yyyy/mm/dd",),)
percentages::Dict{Percent, Int} = Dict{Percent, Int}()
json_properties::JSONText = JSONText("")
matrix::Matrix{Float64} = Matrix{Float64}(undef, 0, 0)
end
json = """
{
"id": 1,
"address": "123 Main St",
"rate": null,
"franken_type": "b",
"notsure": {"key": "value"},
"monster": {
"monster_type": "vampire",
"num_victims": 10
},
"percent": 0.1,
"birthdate": "2023/10/01",
"percentages": {
"0.1": 1,
"0.2": 2
},
"json_properties": {"key": "value"},
"matrix": [[1.0, 2.0], [3.0, 4.0]],
"extra_key": "extra_value"
}
"""
JSON.parse(json, FrankenStruct)
# FrankenStruct(1, "Jim", "123 Main St", missing, :b, JSON.Object{String, Any}("key" => "value"), Dracula(10), Percent(0.1), Date("2023-10-01"), Dict{Percent, Int64}(Percent(0.2) => 2, Percent(0.1) => 1), JSONText("{\"key\": \"value\"}"), [1.0 3.0; 2.0 4.0])
Let's walk through some notable features of the example above:
- The name field isn't present in the JSON input, so the default value of "Jim" is used.
- The address field uses a default @choosetype to determine that the JSON value is not null, so a String should be parsed for the field value.
- The rate field has a null JSON value, so the default @choosetype recognizes it should be "lifted" to Missing, which then uses a predefined lift definition for Missing.
- The type field is a Symbol, and has a fieldtag json=(name="franken_type",), which means the JSON key franken_type will be used to set the field value instead of the default type field name. A default lift definition for Symbol is used to convert the JSON string value to a Symbol.
- The notsure field is of type Any, so the default object type JSON.Object{String, Any} is used to materialize the JSON value.
- The monster field is a polymorphic type, and the JSON value has a monster_type key that determines which concrete type to use. The @choosetype macro is used to define the logic for choosing the concrete type based on the JSON input. Note that the x in @choosetype is a LazyValue, so we materialize via x.monster_type[] in order to compare with the string "vampire".
- The percent field is a custom type Percent, and the JSON.lift definition specifies how to construct a Percent from the JSON value, which is a Float64 in this case.
- The birthdate field uses a custom date format for parsing, specified via its dateformat fieldtag.
- The percentages field is a dictionary with keys of type Percent, which is a custom type. The liftkey function is defined to convert the JSON string keys to Percent values (parsing the Float64 manually).
- The json_properties field has a type of JSONText, which means the raw JSON will be preserved as a String in the JSONText wrapper.
- The matrix field is a Matrix{Float64}, so the JSON input array-of-arrays is materialized as such.
- The extra_key field is not defined in the FrankenStruct type, so it is ignored and skipped over.
NOTE: Why use JSON.Object{String, Any} as the default object type? It provides several benefits:
- Behaves as a drop-in replacement for Dict{String, Any}, so no loss of functionality
- Performance! Its internal representation means memory savings and faster construction for small objects typical in JSON (vs Dict)
- Insertion order is preserved, so the order of keys in the JSON input is preserved in JSON.Object
- Convenient getproperty (i.e. obj.key) syntax is supported, even for Object{String, Any} key types (again, ideal/specialized for JSON usage)

JSON.Object's internal representation uses a linked list, thus key lookups are linear time (O(n)). For large JSON objects (hundreds or thousands of keys), consider using a Dict{String, Any} instead, like JSON.parse(json; dicttype=Dict{String, Any}).
JSON.parsefile
— Function
JSON.parse(json)
JSON.parse(json, T)
JSON.parse!(json, x)
JSON.parsefile(filename)
JSON.parsefile(filename, T)
JSON.parsefile!(filename, x)

The parsefile variants take a filename, open the file, and pass the IOStream to parse. They share their docstring with JSON.parse; see that entry above for supported keyword arguments, default materialization types, and typed materialization via StructUtils.jl.
JSON.parsefile!
— Function
JSON.parse(json)
JSON.parse(json, T)
JSON.parse!(json, x)
JSON.parsefile(filename)
JSON.parsefile(filename, T)
JSON.parsefile!(filename, x)

parsefile! takes a filename, opens the file, and materializes the JSON into an existing object x, mirroring JSON.parse!. It shares its docstring with JSON.parse; see that entry above for details.
JSON.print
— Function
JSON.json(x) -> String
JSON.json(io, x)
JSON.json(file_name, x)

JSON.print shares its docstring with JSON.json; see that entry above for the full description of keyword arguments, supported types, lowering, and serialization examples.
JSON.tostring
— Method
JSON.tostring(x)
Overloadable function that allows non-Integer Number types to convert themselves to a String that is then used when serializing x to JSON. Note that if the result of tostring is not a valid JSON number, it will be serialized as a JSON string, with double quotes around it.
An example overload would look something like:
JSON.tostring(x::MyDecimal) = string(x)
JSON.@omit_empty
— Macro
@omit_empty struct T ...
@omit_empty T
Convenience macro to set omit_empty(::Type{T}) to true for the struct T. Can be used in three ways:
- In front of a struct definition: @omit_empty struct T ... end
- Applied to an existing struct name: @omit_empty T
- Chained with other macros: @omit_empty @other_macro struct T ... end
JSON.@omit_null
— Macro
@omit_null struct T ...
@omit_null T
Convenience macro to set omit_null(::Type{T}) to true for the struct T. Can be used in three ways:
- In front of a struct definition: @omit_null struct T ... end
- Applied to an existing struct name: @omit_null T
- Chained with other macros: @omit_null @defaults struct T ... end
The macro automatically handles complex macro expansions by walking the expression tree to find struct definitions, making it compatible with macros like StructUtils.@defaults.
Examples
# Method 1: Struct annotation
@omit_null struct Person
name::String
email::Union{Nothing, String}
end
# Method 2: Apply to existing struct
struct User
id::Int
profile::Union{Nothing, String}
end
@omit_null User
# Method 3: Chain with @defaults
@omit_null @defaults struct Employee
name::String = "Anonymous"
manager::Union{Nothing, String} = nothing
end