API Reference
JSON.JSONText — Type

JSON.JSONText

Wrapper around a string containing JSON data. Can be used to insert raw JSON into JSON output, like:
json(JSONText("{\"key\": \"value\"}"))

This will output the JSON as-is, without escaping. Note that no check is done to ensure that the JSON is valid.
Can also be used to read "raw JSON" when parsing, meaning no specialized structure (JSON.Object, Vector{Any}, etc.) is created. Example:
x = JSON.parse("[1,2,3]", JSONText)
# x.value == "[1,2,3]"

JSON.LazyValue — Type

JSON.LazyValue

A lazy representation of a JSON value. The LazyValue type supports the "selection" syntax for lazily navigating the JSON value. Lazy values can be materialized via JSON.parse(x), JSON.parse(x, T), or JSON.parse!(x, y).
JSON.isvalidjson — Function

JSON.isvalidjson(json) -> Bool

Check if the given JSON is valid. Returns true if the JSON is valid and false otherwise. Inputs can be a string, a vector of bytes, or an IO stream, the same inputs supported by JSON.lazy and JSON.parse.
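A minimal usage sketch (the literal inputs here are illustrative):

using JSON

JSON.isvalidjson("{\"a\": 1}")       # true
JSON.isvalidjson("{\"a\": 1")        # false: unterminated object
JSON.isvalidjson(IOBuffer("[1,2]"))  # true: IO inputs are accepted as well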
JSON.json — Function

JSON.json(x) -> String
JSON.json(io, x)
JSON.json(file_name, x)

Serialize x to JSON format. The first method takes just the object and returns a String. In the second method, io is an IO object and the JSON output is written to it. In the third method, file_name is a String naming a file that will be opened and written to.
All methods accept the following keyword arguments:
- omit_null::Union{Bool, Nothing}=nothing: Controls whether struct fields that are undefined or are nothing are included in the JSON output. If true, only non-null fields are written. If false, all fields are included regardless of being undefined or nothing. If nothing, the behavior is determined by JSON.omit_null(::Type{T}), which is false by default.
- omit_empty::Union{Bool, Nothing}=nothing: Controls whether struct fields that are empty are included in the JSON output. If true, empty fields are excluded. If false, empty fields are included. If nothing, the behavior is determined by JSON.omit_empty(::Type{T}).
- allownan::Bool=false: If true, allow Inf, -Inf, and NaN in the output. If false, throw an error if Inf, -Inf, or NaN is encountered.
- jsonlines::Bool=false: If true, the input must be array-like and the output will be written in the JSON Lines format, where each element of the array is written on a separate line (i.e. separated by a single newline character '\n'). If false, the output will be written in the standard JSON format.
- pretty::Union{Integer,Bool}=false: Controls pretty printing of the JSON output. If true, the output is pretty-printed with 2 spaces of indentation. If an integer, it is used as the number of spaces of indentation. If false or 0, the output is compact. Note: pretty printing is not supported when jsonlines=true.
- inline_limit::Int=0: For arrays shorter than this limit, pretty printing will be disabled (indentation set to 0).
- ninf::String="-Infinity": Custom string representation for negative infinity.
- inf::String="Infinity": Custom string representation for positive infinity.
- nan::String="NaN": Custom string representation for NaN.
- float_style::Symbol=:shortest: Controls how floating-point numbers are formatted. Options are :shortest (the shortest representation that preserves the value), :fixed (fixed-point notation), and :exp (exponential notation).
- float_precision::Int=1: Number of decimal places to use when float_style is :fixed or :exp.
- bufsize::Int=2^22: Buffer size in bytes for IO operations. When writing to IO, the buffer is flushed to the IO stream once it reaches this size, which helps control memory usage during large write operations. Default is 4MB (2^22 bytes). This parameter is ignored when returning a String.
- style::JSONStyle=JSONWriteStyle(): Custom style object that controls serialization behavior. This allows customizing certain aspects of serialization, like defining a custom lower method for a non-owned type. For example, with struct MyStyle <: JSONStyle end and JSON.lower(x::Rational) = (num=x.num, den=x.den), calling JSON.json(1//3; style=MyStyle()) will output {"num": 1, "den": 3}.
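A brief sketch of a few of these options (outputs shown as expected, not verified here):

using JSON

JSON.json(Dict("a" => [1, 2, 3]))            # compact: "{\"a\":[1,2,3]}"
JSON.json(Dict("a" => [1, 2, 3]); pretty=4)  # pretty-printed with 4 spaces of indentation
JSON.json([1.0, Inf]; allownan=true)         # expected: "[1.0,Infinity]" rather than an error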
By default, x must be a JSON-serializable object. Supported types include:
- AbstractString => JSON string: types must support the AbstractString interface, specifically ncodeunits and codeunit(x, i).
- Bool => JSON boolean: must be true or false
- Nothing => JSON null: must be the nothing singleton value
- Number => JSON number: Integer subtypes or Union{Float16, Float32, Float64} have default implementations; for other Number types, JSON.tostring is first called to convert the value to a String before being written directly to the JSON output
- AbstractArray/Tuple/AbstractSet => JSON array: objects for which JSON.arraylike returns true are output as JSON arrays. arraylike is defined by default for AbstractArray, AbstractSet, Tuple, and Base.Generator. Other types that define it must also properly implement StructUtils.applyeach to iterate over the index => element pairs. Note that arrays with dimensionality > 1 are written as nested arrays, with N nestings for N dimensions, and the 1st dimension is always the innermost nested JSON array (column-major order).
- AbstractDict/NamedTuple/structs => JSON object: if a value doesn't fall into any of the above categories, it is output as a JSON object. StructUtils.applyeach is called, which has appropriate implementations for AbstractDict, NamedTuple, and structs, where field names => values are iterated over. Field names can be output with an alternative name via a field tag overload, like field::Type &(json=(name="alternative_name",),)
If an object is not JSON-serializable, an override for JSON.lower can be defined to convert it to a JSON-serializable object. Some default lower definitions are defined in JSON itself, for example:
StructUtils.lower(::Missing) = nothing
StructUtils.lower(x::Symbol) = String(x)
StructUtils.lower(x::Union{Enum, AbstractChar, VersionNumber, Cstring, Cwstring, UUID, Dates.TimeType}) = string(x)
StructUtils.lower(x::Regex) = x.pattern
These allow common Base/stdlib types to be serialized in an expected format.
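For instance, a minimal sketch of a lower override for a hypothetical custom type (Point is illustrative, not part of JSON.jl):

using JSON

struct Point
    x::Float64
    y::Float64
end

# Serialize Point values as JSON objects via a NamedTuple
JSON.lower(p::Point) = (x=p.x, y=p.y)

JSON.json(Point(1.0, 2.0))  # expected: "{\"x\":1.0,\"y\":2.0}"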
Circular references are tracked automatically and cycles are broken by writing null for any child references.
For pre-formatted JSON data as a String, use JSONText(json) to write the string out as-is.
For AbstractDict objects with non-string keys, StructUtils.lowerkey will be called before serializing. This allows aggregate or other types of dict keys to be converted to an appropriate string representation. See StructUtils.liftkey for the reverse operation, which is called when parsing JSON data back into a dict type.
NOTE: JSON.json should not be overloaded directly by custom types as this isn't robust for various output options (IO, String, etc.) nor recursive situations. Types should define an appropriate JSON.lower definition instead.
NOTE: JSON.json(str, indent::Integer) is special-cased for backwards compatibility with pre-1.0 JSON.jl, as this typically would mean "write out the indent integer to file str". As writing out a single integer to a file is extremely rare, it was decided to keep the pre-1.0 behavior for compatibility reasons.
Examples:
using Dates
abstract type AbstractMonster end
struct Dracula <: AbstractMonster
num_victims::Int
end
struct Werewolf <: AbstractMonster
witching_hour::DateTime
end
struct Percent <: Number
value::Float64
end
JSON.lower(x::Percent) = x.value
StructUtils.lowerkey(x::Percent) = string(x.value)
@noarg mutable struct FrankenStruct
id::Int
name::String # no default to show serialization of an undefined field
address::Union{Nothing, String} = nothing
rate::Union{Missing, Float64} = missing
type::Symbol = :a &(json=(name="franken_type",),)
notsure::Any = JSON.Object("key" => "value")
monster::AbstractMonster = Dracula(10) &(json=(lower=x -> x isa Dracula ? (monster_type="vampire", num_victims=x.num_victims) : (monster_type="werewolf", witching_hour=x.witching_hour),),)
percent::Percent = Percent(0.5)
birthdate::Date = Date(2025, 1, 1) &(json=(dateformat="yyyy/mm/dd",),)
percentages::Dict{Percent, Int} = Dict{Percent, Int}(Percent(0.0) => 0, Percent(1.0) => 1)
    json_properties::JSONText = JSONText("{\"key\": \"value\"}")
matrix::Matrix{Float64} = [1.0 2.0; 3.0 4.0]
extra_field::Any = nothing &(json=(ignore=true,),)
end
franken = FrankenStruct()
franken.id = 1
json = JSON.json(franken; omit_null=false)
# "{\"id\":1,\"name\":null,\"address\":null,\"rate\":null,\"franken_type\":\"a\",\"notsure\":{\"key\":\"value\"},\"monster\":{\"monster_type\":\"vampire\",\"num_victims\":10},\"percent\":0.5,\"birthdate\":\"2025/01/01\",\"percentages\":{\"1.0\":1,\"0.0\":0},\"json_properties\":{\"key\": \"value\"},\"matrix\":[[1.0,3.0],[2.0,4.0]]}"

A few comments on the JSON produced in the example above:
- The name field was #undef, and thus was serialized as null.
- The address and rate fields were nothing and missing, respectively, and thus were serialized as null.
- The type field has a name field tag, so the JSON key for this field is franken_type instead of type.
- The notsure field is a JSON.Object, so it is serialized as a JSON object.
- The monster field is an AbstractMonster, which is a custom type. It has a lower field tag that specifies how the value of this field specifically (not all AbstractMonster values) should be serialized.
- The percent field is a Percent, which is a custom type. It has a lower method that specifies how Percent values should be serialized.
- The birthdate field has a dateformat field tag, so the value follows that format (yyyy/mm/dd) instead of the default ISO date format (yyyy-mm-dd).
- The percentages field is a Dict{Percent, Int}; the lowerkey method specifies how the Percent keys should be serialized as strings.
- The json_properties field is a JSONText, so its value is serialized as-is.
- The matrix field is a Matrix{Float64}. It is serialized as a JSON array, with the first dimension being the innermost nested JSON array (column-major order).
- The extra_field field has an ignore field tag, so it is skipped when serializing.
JSON.lazy — Function

JSON.lazy(json; kw...)
JSON.lazyfile(file; kw...)

Detect the initial JSON value in json, returning a JSON.LazyValue instance. The json input can be:
- AbstractString
- AbstractVector{UInt8}
- IO, IOStream, Cmd (bytes are fully read into a Vector{UInt8} for parsing, i.e. read(json) is called)
lazyfile is a convenience method that takes a filename and opens the file before calling lazy.
The JSON.LazyValue supports the "selection" syntax for lazily navigating the JSON value. For example (x = JSON.lazy(json)):
- x.key, x[:key], or x["key"] for JSON objects
- x[1], x[2:3], x[end] for JSON arrays
- propertynames(x) to see all keys in the JSON object
- x.a.b.c for selecting deeply nested values
- x[~, (k, v) -> k == "foo"] for recursively searching for key "foo" and returning matching values
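For instance, a small sketch of the selection syntax (the JSON here is illustrative; results shown as expected):

using JSON

x = JSON.lazy("{\"a\": {\"b\": [1, 2, 3]}}")
x.a.b              # still a LazyValue; nothing fully materialized yet
x.a.b[2]           # a LazyValue pointing at the second array element
JSON.parse(x.a.b)  # Any[1, 2, 3]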
NOTE: Selecting values from a LazyValue will always return a LazyValue. Selecting a specific key of an object or index of an array will only parse what is necessary before returning. This leads to a few conclusions about how to effectively utilize LazyValue:
- JSON.lazy is great for one-time access of a value in JSON
- It's also great for finding a required deeply nested value
- It's not great for any case where repeated access to values is required; this results in the same JSON being parsed on each access (i.e. naively iterating a lazy JSON array will be O(n^2))
- Best practice is to use JSON.lazy sparingly unless there's a specific case where it will benefit, or to use JSON.lazy as a means to access a value that is then fully materialized
Another option for processing JSON.LazyValue is calling foreach(f, x) which is defined on JSON.LazyValue for JSON objects and arrays. For objects, f should be of the form f(kv::Pair{String, LazyValue}) where kv is a key-value pair, and for arrays, f(v::LazyValue) where v is the value at the index. This allows for iterating over all key-value pairs in an object or all values in an array without materializing the entire structure.
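A minimal sketch of foreach over a lazy JSON object (keys and values are illustrative):

using JSON

x = JSON.lazy("{\"a\": 1, \"b\": 2}")
foreach(x) do (k, v)
    # k is a String key, v is a LazyValue; materialize v only if needed
    println(k, " => ", JSON.parse(v))
end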
Lazy values can be materialized via JSON.parse in a few different forms:
- JSON.parse(json): Default materialization into JSON.Object (a Dict-like type), Vector{Any}, etc.
- JSON.parse(json, T): Materialize into a user-provided type T (following rules/programmatic construction from StructUtils.jl)
- JSON.parse!(json, x): Materialize into an existing object x (following rules/programmatic construction from StructUtils.jl)
Thus, for completeness' sake, here's an example of ideal usage of JSON.lazy:
x = JSON.lazy(very_large_json_object)
# find a deeply nested value
y = x.a.b.c.d.e.f.g
# materialize the value
z = JSON.parse(y)
# now mutate/repeatedly access values in z

In this example, we only parsed as much of the very_large_json_object as was required to find the value y. Then we fully materialized y into z, which is now a normal Julia object. We can now mutate or access values in z.
Currently supported keyword arguments include:
- allownan::Bool = false: whether "special" float values should be allowed while parsing (NaN, Inf, -Inf); these values are specifically not allowed in the JSON spec, but many JSON libraries allow reading/writing them
- ninf::String = "-Infinity": the string that will be used to parse -Inf if allownan=true
- inf::String = "Infinity": the string that will be used to parse Inf if allownan=true
- nan::String = "NaN": the string that will be used to parse NaN if allownan=true
- jsonlines::Bool = false: whether the JSON input should be treated as an implicit array, with newlines separating individual JSON elements and no leading '[' or trailing ']' characters. Common in logging or streaming workflows. Defaults to true when used with JSON.parsefile and the filename extension is .jsonl or .ndjson. Note this ensures that parsing will always return an array at the root level.
Note that validation is only fully done on null, true, and false, while other values are only lazily inferred from the first non-whitespace character:
- '{': JSON object
- '[': JSON array
- '"': JSON string
- '0'-'9' or '-': JSON number
Further validation for these values is done later when they are materialized, e.g. via JSON.parse, or via selection syntax calls on a LazyValue.
JSON.lazyfile — Function

JSON.lazyfile(file; kw...)

Convenience method that takes a filename, opens the file, and calls JSON.lazy on its contents. It shares its docstring and keyword arguments with JSON.lazy above; see that entry for details.
JSON.omit_empty — Method

JSON.omit_empty(::Type{T})::Bool
JSON.omit_empty(::JSONStyle, ::Type{T})::Bool

Controls whether struct fields that are empty are included in the JSON output. Returns false by default, meaning empty fields are included. To instead exclude empty fields, set this to true. A field is considered empty if it is nothing, an empty collection (empty array, dict, string, tuple, or named tuple), or missing. This can also be controlled via the omit_empty keyword argument in JSON.json.
# Override for a specific type
JSON.omit_empty(::Type{MyStruct}) = true
# Override for a custom style
struct MyStyle <: JSON.JSONStyle end
JSON.omit_empty(::MyStyle, ::Type{T}) where {T} = true

JSON.omit_null — Method

JSON.omit_null(::Type{T})::Bool
JSON.omit_null(::JSONStyle, ::Type{T})::Bool

Controls whether struct fields that are undefined or are nothing are included in the JSON output. Returns false by default, meaning all fields are included, regardless of being undefined or nothing. To instead ensure only non-null fields are written, set this to true. This can also be controlled via the omit_null keyword argument in JSON.json.
# Override for a specific type
JSON.omit_null(::Type{MyStruct}) = true
# Override for a custom style
struct MyStyle <: JSON.JSONStyle end
JSON.omit_null(::MyStyle, ::Type{T}) where {T} = true

JSON.parse — Function

JSON.parse(json)
JSON.parse(json, T)
JSON.parse!(json, x)
JSON.parsefile(filename)
JSON.parsefile(filename, T)
JSON.parsefile!(filename, x)

Parse a JSON input (string, vector, stream, LazyValue, etc.) into a Julia value. The parsefile variants take a filename, open the file, and pass the resulting IOStream to parse.
Currently supported keyword arguments include:
- allownan: allow parsing NaN, Inf, and -Inf since they are otherwise invalid JSON
- ninf: string to use for -Inf (default: "-Infinity")
- inf: string to use for Inf (default: "Infinity")
- nan: string to use for NaN (default: "NaN")
- jsonlines: treat the json input as an implicit JSON array, delimited by newlines, each element being parsed from each row/line in the input
- dicttype: a custom AbstractDict type to use instead of JSON.Object{String, Any} as the default type for JSON object materialization
- null: a custom value to use for JSON null values (default: nothing)
- style: a custom StructUtils.StructStyle subtype instance to be used in calls to StructUtils.make and StructUtils.lift. This allows overriding default behaviors for non-owned types.
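A brief sketch of a couple of these options (results shown as expected):

using JSON

JSON.parse("[1, null, 3]")                # Any[1, nothing, 3]
JSON.parse("[1, null, 3]"; null=missing)  # Any[1, missing, 3]
JSON.parse("[1.5, NaN]"; allownan=true)   # Any[1.5, NaN]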
The methods without a type specified (JSON.parse(json), JSON.parsefile(filename)) do a generic materialization into predefined default types, including:
- JSON object => JSON.Object{String, Any} (see note below)
- JSON array => Vector{Any}
- JSON string => String
- JSON number => Int64, BigInt, Float64, or BigFloat
- JSON true => true
- JSON false => false
- JSON null => nothing
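For example, a minimal sketch of the default materialization (the JSON here is illustrative):

using JSON

x = JSON.parse("{\"name\": \"Jane\", \"scores\": [1, 2.5, null], \"active\": true}")
x isa JSON.Object{String, Any}  # true
x["scores"]                     # Any[1, 2.5, nothing]
x.active                        # true; JSON.Object also supports property access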
When a type T is specified (JSON.parse(json, T), JSON.parsefile(filename, T)), materialization to a value of type T will be attempted utilizing machinery and interfaces provided by the StructUtils.jl package, including:
- For JSON objects, JSON keys will be matched against field names of T, with a value being constructed via T(args...)
- If T was defined with the @noarg macro, an empty instance will be constructed, and field values set as JSON keys match field names
- If T had default field values defined using the @defaults or @kwarg macros (from the StructUtils.jl package), those will be set in the value of T unless different values are parsed from the JSON
- If T was defined with the @nonstruct macro, the struct will be treated as a primitive type and constructed using the lift function rather than from field values
- JSON keys that don't match field names in T will be ignored (skipped over)
- If a field in T has a name fieldtag, the name value will be used to match JSON keys instead
- If T or any recursive field type of T is abstract, an appropriate JSON.@choosetype T x -> ... definition should exist for "choosing" a concrete type at runtime; default type choosing exists for Union{T, Missing} and Union{T, Nothing}, where the JSON value is checked for null. If the Any type is encountered, the default materialization types will be used (JSON.Object, Vector{Any}, etc.)
- For any non-JSON-standard, non-aggregate (i.e. non-object, non-array) field type of T, a JSON.lift(::Type{T}, x) = ... definition can be defined for how to "lift" the default JSON value (String, Number, Bool, nothing) to the type T; a default lift definition exists, for example, for JSON.lift(::Type{Missing}, x) = missing, where the standard JSON value for null is nothing and it can be "lifted" to missing
- For any T or recursive field type of T that is an AbstractDict, non-string/symbol/integer keys will need a StructUtils.liftkey(::Type{T}, x) definition for how to "lift" the JSON string key to the key type of T
For any T or recursive field type of T that is JSON.JSONText, the next full raw JSON value will be preserved in the JSONText wrapper as-is.
For the unique case of nested JSON arrays and prior knowledge of the expected dimensionality, a target type T can be given as an AbstractArray{T, N} subtype. In this case, the JSON array data is materialized as an n-dimensional array, where: the number of JSON array nestings must match the Julia array dimensionality (N), nested JSON arrays at matching depths are assumed to have equal lengths, and the length of the innermost JSON array is the 1st dimension length and so on. For example, the JSON array [[[1.0,2.0]]] would be materialized as a 3-dimensional array of Float64 with sizes (2, 1, 1), when called like JSON.parse("[[[1.0,2.0]]]", Array{Float64, 3}). Note that n-dimensional Julia arrays are written to json as nested JSON arrays by default, to enable lossless re-parsing, though the dimensionality must still be provided explicitly to the call to parse (i.e. default parsing via JSON.parse(json) will result in plain nested Vector{Any}s returned).
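A compact sketch of the example described above:

using JSON

A = JSON.parse("[[[1.0,2.0]]]", Array{Float64, 3})
size(A)                      # (2, 1, 1)
JSON.parse("[[[1.0,2.0]]]")  # default parsing: plain nested Vector{Any}s instead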
Examples:
using Dates
abstract type AbstractMonster end
struct Dracula <: AbstractMonster
num_victims::Int
end
struct Werewolf <: AbstractMonster
witching_hour::DateTime
end
JSON.@choosetype AbstractMonster x -> x.monster_type[] == "vampire" ? Dracula : Werewolf
struct Percent <: Number
value::Float64
end
JSON.lift(::Type{Percent}, x) = Percent(Float64(x))
StructUtils.liftkey(::Type{Percent}, x::String) = Percent(parse(Float64, x))
@defaults struct FrankenStruct
id::Int = 0
name::String = "Jim"
address::Union{Nothing, String} = nothing
rate::Union{Missing, Float64} = missing
type::Symbol = :a &(json=(name="franken_type",),)
notsure::Any = nothing
monster::AbstractMonster = Dracula(0)
percent::Percent = Percent(0.0)
birthdate::Date = Date(0) &(json=(dateformat="yyyy/mm/dd",),)
percentages::Dict{Percent, Int} = Dict{Percent, Int}()
json_properties::JSONText = JSONText("")
matrix::Matrix{Float64} = Matrix{Float64}(undef, 0, 0)
end
json = """
{
"id": 1,
"address": "123 Main St",
"rate": null,
"franken_type": "b",
"notsure": {"key": "value"},
"monster": {
"monster_type": "vampire",
"num_victims": 10
},
"percent": 0.1,
"birthdate": "2023/10/01",
"percentages": {
"0.1": 1,
"0.2": 2
},
"json_properties": {"key": "value"},
"matrix": [[1.0, 2.0], [3.0, 4.0]],
"extra_key": "extra_value"
}
"""
JSON.parse(json, FrankenStruct)
# FrankenStruct(1, "Jim", "123 Main St", missing, :b, JSON.Object{String, Any}("key" => "value"), Dracula(10), Percent(0.1), Date("2023-10-01"), Dict{Percent, Int64}(Percent(0.2) => 2, Percent(0.1) => 1), JSONText("{\"key\": \"value\"}"), [1.0 3.0; 2.0 4.0])

Let's walk through some notable features of the example above:
- The name field isn't present in the JSON input, so the default value of "Jim" is used.
- The address field uses a default @choosetype to determine that the JSON value is not null, so a String should be parsed for the field value.
- The rate field has a null JSON value, so the default @choosetype recognizes it should be "lifted" to Missing, which then uses a predefined lift definition for Missing.
- The type field is a Symbol and has a field tag json=(name="franken_type",), which means the JSON key franken_type will be used to set the field value instead of the default type field name. A default lift definition for Symbol is used to convert the JSON string value to a Symbol.
- The notsure field is of type Any, so the default object type JSON.Object{String, Any} is used to materialize the JSON value.
- The monster field is a polymorphic type, and the JSON value has a monster_type key that determines which concrete type to use. The @choosetype macro defines the logic for choosing the concrete type based on the JSON input. Note that the x in @choosetype is a LazyValue, so we materialize via x.monster_type[] in order to compare with the string "vampire".
- The percent field is a custom type Percent, and the JSON.lift definition specifies how to construct a Percent from the JSON value, which is a Float64 in this case.
- The birthdate field uses a custom date format for parsing, specified via its dateformat field tag.
- The percentages field is a dictionary with keys of the custom type Percent. The liftkey function is defined to convert the JSON string keys to Percent values (parsing the Float64 manually).
- The json_properties field has a type of JSONText, which means the raw JSON will be preserved as a String in the JSONText wrapper.
- The matrix field is a Matrix{Float64}, so the JSON input array-of-arrays is materialized as such.
- The extra_key key is not a field defined in the FrankenStruct type, so it is ignored and skipped over.
NOTE: Why use JSON.Object{String, Any} as the default object type? It provides several benefits:
- Behaves as a drop-in replacement for Dict{String, Any}, so no loss of functionality
- Performance! Its internal representation means memory savings and faster construction for small objects typical in JSON (vs Dict)
- Insertion order is preserved, so the order of keys in the JSON input is preserved in JSON.Object
- Convenient getproperty (i.e. obj.key) syntax is supported, even for Object{String, Any} key types (again, ideal/specialized for JSON usage)
JSON.Object's internal representation uses a linked list, so key lookups are linear time (O(n)). For large JSON objects (hundreds or thousands of keys), consider using a Dict{String, Any} instead, like JSON.parse(json; dicttype=Dict{String, Any}).
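For example, a quick sketch of swapping the object type:

using JSON

obj = JSON.parse("{\"a\": 1, \"b\": 2}"; dicttype=Dict{String, Any})
obj isa Dict{String, Any}  # true; hash-based lookups instead of the linked-list O(n)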
JSON.parsefile — Function

JSON.parsefile(filename)
JSON.parsefile(filename, T)

Open the file named filename and parse its contents as JSON. Shares its docstring, keyword arguments, and materialization behavior with JSON.parse above; see that entry for details.
JSON.parsefile! — Function

JSON.parsefile!(filename, x)

Open the file named filename and parse its contents as JSON, materializing into the existing object x. Shares its docstring and keyword arguments with JSON.parse above; see that entry for details.
JSON.print — Function

JSON.print shares its docstring with JSON.json above; see that entry for the full description of methods, keyword arguments, supported types, and examples.
JSON.tostring — Method

JSON.tostring(x)

Overloadable function that allows non-Integer Number types to convert themselves to a String that is then used when serializing x to JSON. Note that if the result of tostring is not a valid JSON number, it will be serialized as a JSON string, with double quotes around it.
An example overload would look something like:
JSON.tostring(x::MyDecimal) = string(x)

JSON.@omit_empty — Macro

@omit_empty struct T ...
@omit_empty T

Convenience macro to set omit_empty(::Type{T}) to true for the struct T. Can be used in three ways:
- In front of a struct definition: @omit_empty struct T ... end
- Applied to an existing struct name: @omit_empty T
- Chained with other macros: @omit_empty @other_macro struct T ... end
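A brief sketch of what this enables (Inventory is illustrative; the output assumes the documented empty-field semantics):

using JSON

@omit_empty struct Inventory
    name::String
    tags::Vector{String}
end

JSON.json(Inventory("widget", String[]))  # expected: "{\"name\":\"widget\"}" (empty tags omitted)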
JSON.@omit_null — Macro

@omit_null struct T ...
@omit_null T

Convenience macro to set omit_null(::Type{T}) to true for the struct T. Can be used in three ways:
- In front of a struct definition: @omit_null struct T ... end
- Applied to an existing struct name: @omit_null T
- Chained with other macros: @omit_null @defaults struct T ... end
The macro automatically handles complex macro expansions by walking the expression tree to find struct definitions, making it compatible with macros like StructUtils.@defaults.
Examples
# Method 1: Struct annotation
@omit_null struct Person
name::String
email::Union{Nothing, String}
end
# Method 2: Apply to existing struct
struct User
id::Int
profile::Union{Nothing, String}
end
@omit_null User
# Method 3: Chain with @defaults
@omit_null @defaults struct Employee
name::String = "Anonymous"
manager::Union{Nothing, String} = nothing
end