API Reference

JSON.JSONText (Type)
JSON.JSONText

Wrapper around a string containing JSON data. Can be used to insert raw JSON in JSON output, like:

json(JSONText("{\"key\": \"value\"}"))

This will output the JSON as-is, without escaping. Note that no check is done to ensure that the JSON is valid.

Can also be used to read "raw JSON" when parsing, meaning no specialized structure (JSON.Object, Vector{Any}, etc.) is created. Example:

x = JSON.parse("[1,2,3]", JSONText)
# x.value == "[1,2,3]"
source
JSON.LazyValue (Type)
JSON.LazyValue

A lazy representation of a JSON value. The LazyValue type supports the "selection" syntax for lazily navigating the JSON value. Lazy values can be materialized via JSON.parse(x), JSON.parse(x, T), or JSON.parse!(x, y).

source
JSON.isvalidjson (Function)
JSON.isvalidjson(json) -> Bool

Check whether the given JSON is valid, returning true if it is and false otherwise. Inputs can be a string, a vector of bytes, or an IO stream, the same inputs as supported for JSON.lazy and JSON.parse.
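A quick sketch of validity checking (assuming the package is loaded as `JSON`):

```julia
using JSON

# A well-formed object is valid
JSON.isvalidjson("{\"key\": \"value\"}")   # true

# A trailing comma is not allowed by the JSON spec
JSON.isvalidjson("{\"key\": \"value\",}")  # false
```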

source
JSON.json (Function)
JSON.json(x) -> String
JSON.json(io, x)
JSON.json(file_name, x)

Serialize x to JSON format. The first method takes just the object and returns a String. In the second method, io is an IO object, and the JSON output will be written to it. For the third method, file_name is a String; a file will be opened and the JSON output written to it.

All methods accept the following keyword arguments:

  • omit_null::Union{Bool, Nothing}=nothing: Controls whether struct fields that are undefined or are nothing are included in the JSON output. If true, only non-null fields are written. If false, all fields are included regardless of being undefined or nothing. If nothing, the behavior is determined by JSON.omit_null(::Type{T}), which is false by default.

  • omit_empty::Union{Bool, Nothing}=nothing: Controls whether struct fields that are empty are included in the JSON output. If true, empty fields are excluded. If false, empty fields are included. If nothing, the behavior is determined by JSON.omit_empty(::Type{T}).

  • allownan::Bool=false: If true, allow Inf, -Inf, and NaN in the output. If false, throw an error if Inf, -Inf, or NaN is encountered.

  • jsonlines::Bool=false: If true, input must be array-like and the output will be written in the JSON Lines format, where each element of the array is written on a separate line (i.e. separated by a single newline character '\n'). If false, the output will be written in the standard JSON format.

  • pretty::Union{Integer,Bool}=false: Controls pretty printing of the JSON output. If true, the output will be pretty-printed with 2 spaces of indentation. If an integer, it will be used as the number of spaces of indentation. If false or 0, the output will be compact. Note: Pretty printing is not supported when jsonlines=true.

  • inline_limit::Int=0: For arrays shorter than this limit, pretty printing will be disabled (indentation set to 0).

  • ninf::String="-Infinity": Custom string representation for negative infinity.

  • inf::String="Infinity": Custom string representation for positive infinity.

  • nan::String="NaN": Custom string representation for NaN.

  • float_style::Symbol=:shortest: Controls how floating-point numbers are formatted. Options are:

    • :shortest: Use the shortest representation that preserves the value
    • :fixed: Use fixed-point notation
    • :exp: Use exponential notation
  • float_precision::Int=1: Number of decimal places to use when float_style is :fixed or :exp.

  • bufsize::Int=2^22: Buffer size in bytes for IO operations. When writing to IO, the buffer will be flushed to the IO stream once it reaches this size. This helps control memory usage during large write operations. Default is 4MB (2^22 bytes). This parameter is ignored when returning a String.

  • style::JSONStyle=JSONWriteStyle(): Custom style object that controls serialization behavior. This allows customizing certain aspects of serialization, like defining a custom lower method for a non-owned type. For example, given struct MyStyle <: JSONStyle end and JSON.lower(::MyStyle, x::Rational) = (num=x.num, den=x.den), calling JSON.json(1//3; style=MyStyle()) will output {"num": 1, "den": 3}.
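A few of the keyword arguments above in action (a sketch; the exact whitespace of pretty output may vary):

```julia
using JSON

# Compact output by default
JSON.json([1, 2, 3])                  # "[1,2,3]"

# pretty=true indents with 2 spaces; an integer sets the indent width
println(JSON.json((a=1, b=[1, 2]); pretty=true))

# allownan permits the non-spec float values NaN, Inf, -Inf
JSON.json([1.0, NaN]; allownan=true)  # "[1.0,NaN]"
```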

By default, x must be a JSON-serializable object. Supported types include:

  • AbstractString => JSON string: types must support the AbstractString interface, specifically with support for ncodeunits and codeunit(x, i).
  • Bool => JSON boolean: must be true or false
  • Nothing => JSON null: must be the nothing singleton value
  • Number => JSON number: Integer subtypes and Union{Float16, Float32, Float64} have default implementations; for other Number types, JSON.tostring is first called to convert the value to a String, which is then written directly to the JSON output
  • AbstractArray/Tuple/AbstractSet => JSON array: objects for which JSON.arraylike returns true are output as JSON arrays. arraylike is defined by default for AbstractArray, AbstractSet, Tuple, and Base.Generator. Other types that define arraylike must also properly implement StructUtils.applyeach to iterate over the index => element pairs. Note that arrays with dimensionality > 1 are written as nested arrays, with N nestings for N dimensions, and the 1st dimension is always the innermost nested JSON array (column-major order).
  • AbstractDict/NamedTuple/structs => JSON object: if a value doesn't fall into any of the above categories, it is output as a JSON object. StructUtils.applyeach is called, which has appropriate implementations for AbstractDict, NamedTuple, and structs, where field names => values are iterated over. Field names can be output with an alternative name via field tag overload, like field::Type &(json=(name="alternative_name",),)

If an object is not JSON-serializable, an override for JSON.lower can be defined to convert it to a JSON-serializable object. Some default lower definitions are defined in JSON itself, for example:

  • StructUtils.lower(::Missing) = nothing
  • StructUtils.lower(x::Symbol) = String(x)
  • StructUtils.lower(x::Union{Enum, AbstractChar, VersionNumber, Cstring, Cwstring, UUID, Dates.TimeType}) = string(x)
  • StructUtils.lower(x::Regex) = x.pattern

These allow common Base/stdlib types to be serialized in an expected format.

Circular references are tracked automatically and cycles are broken by writing null for any child references.

For pre-formatted JSON data as a String, use JSONText(json) to write the string out as-is.

For AbstractDict objects with non-string keys, StructUtils.lowerkey will be called before serializing. This allows aggregate or other types of dict keys to be converted to an appropriate string representation. See StructUtils.liftkey for the reverse operation, which is called when parsing JSON data back into a dict type.
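For instance, a dict keyed by a custom type could define lowerkey like this (the `Point` type here is hypothetical, for illustration only):

```julia
using JSON, StructUtils

struct Point
    x::Int
    y::Int
end

# Convert Point keys to a string form before writing the JSON object key
StructUtils.lowerkey(p::Point) = "$(p.x),$(p.y)"

JSON.json(Dict(Point(1, 2) => "a"))   # "{\"1,2\":\"a\"}"
```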

NOTE: JSON.json should not be overloaded directly by custom types as this isn't robust for various output options (IO, String, etc.) nor recursive situations. Types should define an appropriate JSON.lower definition instead.

NOTE: JSON.json(str, indent::Integer) is special-cased for backwards compatibility with pre-1.0 JSON.jl; under the current API this call would otherwise mean "write the integer indent out to the file str". As writing a single integer to a file is extremely rare, the pre-1.0 pretty-printing behavior was kept for compatibility reasons.

Examples:

using Dates

abstract type AbstractMonster end

struct Dracula <: AbstractMonster
    num_victims::Int
end

struct Werewolf <: AbstractMonster
    witching_hour::DateTime
end

struct Percent <: Number
    value::Float64
end

JSON.lower(x::Percent) = x.value
StructUtils.lowerkey(x::Percent) = string(x.value)

@noarg mutable struct FrankenStruct
    id::Int
    name::String # no default to show serialization of an undefined field
    address::Union{Nothing, String} = nothing
    rate::Union{Missing, Float64} = missing
    type::Symbol = :a &(json=(name="franken_type",),)
    notsure::Any = JSON.Object("key" => "value")
    monster::AbstractMonster = Dracula(10) &(json=(lower=x -> x isa Dracula ? (monster_type="vampire", num_victims=x.num_victims) : (monster_type="werewolf", witching_hour=x.witching_hour),),)
    percent::Percent = Percent(0.5)
    birthdate::Date = Date(2025, 1, 1) &(json=(dateformat="yyyy/mm/dd",),)
    percentages::Dict{Percent, Int} = Dict{Percent, Int}(Percent(0.0) => 0, Percent(1.0) => 1)
    json_properties::JSONText = JSONText("{\"key\": \"value\"}")
    matrix::Matrix{Float64} = [1.0 2.0; 3.0 4.0]
    extra_field::Any = nothing &(json=(ignore=true,),)
end

franken = FrankenStruct()
franken.id = 1

json = JSON.json(franken; omit_null=false)
# "{\"id\":1,\"name\":null,\"address\":null,\"rate\":null,\"franken_type\":\"a\",\"notsure\":{\"key\":\"value\"},\"monster\":{\"monster_type\":\"vampire\",\"num_victims\":10},\"percent\":0.5,\"birthdate\":\"2025/01/01\",\"percentages\":{\"1.0\":1,\"0.0\":0},\"json_properties\":{\"key\": \"value\"},\"matrix\":[[1.0,3.0],[2.0,4.0]]}"

A few comments on the JSON produced in the example above:

  • The name field was #undef, and thus was serialized as null.
  • The address and rate fields were nothing and missing, respectively, and thus were serialized as null.
  • The type field has a name field tag, so the JSON key for this field is franken_type instead of type.
  • The notsure field is a JSON.Object, so it is serialized as a JSON object.
  • The monster field is an AbstractMonster, which is a custom type. It has a lower field tag that specifies how the value of this field specifically (not all AbstractMonster values) should be serialized.
  • The percent field is a Percent, which is a custom type. It has a lower method that specifies how Percent values should be serialized.
  • The birthdate field has a dateformat field tag, so the value follows the format (yyyy/mm/dd) instead of the default ISO date format (yyyy-mm-dd).
  • The percentages field is a Dict{Percent, Int}; a lowerkey method specifies how the Percent keys should be serialized as strings.
  • The json_properties field is a JSONText, so the JSONText value is serialized as-is.
  • The matrix field is a Matrix{Float64}; it is serialized as nested JSON arrays, with the first dimension being the innermost nested JSON array (column-major order).
  • The extra_field field has an ignore field tag, so it is skipped when serializing.
source
JSON.lazy (Function)
JSON.lazy(json; kw...)
JSON.lazyfile(file; kw...)

Detect the initial JSON value in json, returning a JSON.LazyValue instance. json input can be:

  • AbstractString
  • AbstractVector{UInt8}
  • IO, IOStream, Cmd (bytes are fully read into a Vector{UInt8} for parsing, i.e. read(json) is called)

lazyfile is a convenience method that takes a filename and opens the file before calling lazy.

The JSON.LazyValue supports the "selection" syntax for lazily navigating the JSON value. For example (x = JSON.lazy(json)):

  • x.key, x[:key] or x["key"] for JSON objects
  • x[1], x[2:3], x[end] for JSON arrays
  • propertynames(x) to see all keys in the JSON object
  • x.a.b.c for selecting deeply nested values
  • x[~, (k, v) -> k == "foo"] for recursively searching for key "foo", returning matching values
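The selection syntax above might be used like so (a small sketch):

```julia
using JSON

x = JSON.lazy("{\"a\": {\"b\": [1, 2, 3]}}")

# Navigate lazily; only the JSON needed to reach the value is parsed
y = x.a.b

# Materialize the selected array into a normal Julia value
JSON.parse(y)   # [1, 2, 3]
```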

NOTE: Selecting values from a LazyValue will always return a LazyValue. Selecting a specific key of an object or index of an array will only parse what is necessary before returning. This leads to a few conclusions about how to effectively utilize LazyValue:

  • JSON.lazy is great for one-time access of a value in JSON
  • It's also great for finding a required deeply nested value
  • It's not great for any case where repeated access to values is required; this results in the same JSON being parsed on each access (i.e. naively iterating a lazy JSON array will be O(n^2))
  • Best practice is to use JSON.lazy sparingly unless there's a specific case where it will benefit; or use JSON.lazy as a means to access a value that is then fully materialized

Another option for processing JSON.LazyValue is calling foreach(f, x) which is defined on JSON.LazyValue for JSON objects and arrays. For objects, f should be of the form f(kv::Pair{String, LazyValue}) where kv is a key-value pair, and for arrays, f(v::LazyValue) where v is the value at the index. This allows for iterating over all key-value pairs in an object or all values in an array without materializing the entire structure.

Lazy values can be materialized via JSON.parse in a few different forms:

  • JSON.parse(json): Default materialization into JSON.Object (a Dict-like type), Vector{Any}, etc.
  • JSON.parse(json, T): Materialize into a user-provided type T (following rules/programmatic construction from StructUtils.jl)
  • JSON.parse!(json, x): Materialize into an existing object x (following rules/programmatic construction from StructUtils.jl)

Thus, for completeness' sake, here's an example of ideal usage of JSON.lazy:

x = JSON.lazy(very_large_json_object)
# find a deeply nested value
y = x.a.b.c.d.e.f.g
# materialize the value
z = JSON.parse(y)
# now mutate/repeatedly access values in z

In this example, we only parsed as much of the very_large_json_object as was required to find the value y. Then we fully materialized y into z, which is now a normal Julia object. We can now mutate or access values in z.

Currently supported keyword arguments include:

  • allownan::Bool = false: whether "special" float values should be allowed while parsing (NaN, Inf, -Inf); these values are specifically not allowed in the JSON spec, but many JSON libraries allow reading/writing them
  • ninf::String = "-Infinity": the string that will be used to parse -Inf if allownan=true
  • inf::String = "Infinity": the string that will be used to parse Inf if allownan=true
  • nan::String = "NaN": the string that will be used to parse NaN if allownan=true
  • jsonlines::Bool = false: whether the JSON input should be treated as an implicit array, with newlines separating individual JSON elements and no leading '[' or trailing ']' characters. Common in logging or streaming workflows. Defaults to true when used with JSON.parsefile and the filename extension is .jsonl or .ndjson. Note this ensures that parsing will always return an array at the root level.
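For example, JSON Lines input parses to a root-level array (sketch):

```julia
using JSON

jsonl = """
{"id": 1}
{"id": 2}
"""

# Each line is one element of the implicit root array
v = JSON.parse(jsonl; jsonlines=true)
length(v)   # 2
```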

Note that validation is only fully done on null, true, and false, while other values are only lazily inferred from the first non-whitespace character:

  • '{': JSON object
  • '[': JSON array
  • '"': JSON string
  • '0'-'9' or '-': JSON number

Further validation for these values is done later when materialized, like JSON.parse, or via selection syntax calls on a LazyValue.

source
JSON.lazyfile (Function)
JSON.lazy(json; kw...)
JSON.lazyfile(file; kw...)

Detect the initial JSON value in json, returning a JSON.LazyValue instance. json input can be:

  • AbstractString
  • AbstractVector{UInt8}
  • IO, IOStream, Cmd (bytes are fully read into a Vector{UInt8} for parsing, i.e. read(json) is called)

lazyfile is a convenience method that takes a filename and opens the file before calling lazy.

The JSON.LazyValue supports the "selection" syntax for lazily navigating the JSON value. For example (x = JSON.lazy(json)):

  • x.key, x[:key] or x["key"] for JSON objects
  • x[1], x[2:3], x[end] for JSON arrays
  • propertynames(x) to see all keys in the JSON object
  • x.a.b.c for selecting deeply nested values
  • x[~, (k, v) -> k == "foo"] for recursively searching for key "foo", returning matching values

NOTE: Selecting values from a LazyValue will always return a LazyValue. Selecting a specific key of an object or index of an array will only parse what is necessary before returning. This leads to a few conclusions about how to effectively utilize LazyValue:

  • JSON.lazy is great for one-time access of a value in JSON
  • It's also great for finding a required deeply nested value
  • It's not great for any case where repeated access to values is required; this results in the same JSON being parsed on each access (i.e. naively iterating a lazy JSON array will be O(n^2))
  • Best practice is to use JSON.lazy sparingly unless there's a specific case where it will benefit; or use JSON.lazy as a means to access a value that is then fully materialized

Another option for processing JSON.LazyValue is calling foreach(f, x) which is defined on JSON.LazyValue for JSON objects and arrays. For objects, f should be of the form f(kv::Pair{String, LazyValue}) where kv is a key-value pair, and for arrays, f(v::LazyValue) where v is the value at the index. This allows for iterating over all key-value pairs in an object or all values in an array without materializing the entire structure.

Lazy values can be materialized via JSON.parse in a few different forms:

  • JSON.parse(json): Default materialization into JSON.Object (a Dict-like type), Vector{Any}, etc.
  • JSON.parse(json, T): Materialize into a user-provided type T (following rules/programmatic construction from StructUtils.jl)
  • JSON.parse!(json, x): Materialize into an existing object x (following rules/programmatic construction from StructUtils.jl)

Thus, for completeness' sake, here's an example of ideal usage of JSON.lazy:

x = JSON.lazy(very_large_json_object)
# find a deeply nested value
y = x.a.b.c.d.e.f.g
# materialize the value
z = JSON.parse(y)
# now mutate/repeatedly access values in z

In this example, we only parsed as much of the very_large_json_object as was required to find the value y. Then we fully materialized y into z, which is now a normal Julia object. We can now mutate or access values in z.

Currently supported keyword arguments include:

  • allownan::Bool = false: whether "special" float values should be allowed while parsing (NaN, Inf, -Inf); these values are specifically not allowed in the JSON spec, but many JSON libraries allow reading/writing them
  • ninf::String = "-Infinity": the string that will be used to parse -Inf if allownan=true
  • inf::String = "Infinity": the string that will be used to parse Inf if allownan=true
  • nan::String = "NaN": the string that will be used to parse NaN if allownan=true
  • jsonlines::Bool = false: whether the JSON input should be treated as an implicit array, with newlines separating individual JSON elements and no leading '[' or trailing ']' characters. Common in logging or streaming workflows. Defaults to true when used with JSON.parsefile and the filename extension is .jsonl or .ndjson. Note this ensures that parsing will always return an array at the root level.

Note that validation is only fully done on null, true, and false, while other values are only lazily inferred from the first non-whitespace character:

  • '{': JSON object
  • '[': JSON array
  • '"': JSON string
  • '0'-'9' or '-': JSON number

Further validation for these values is done later when materialized, like JSON.parse, or via selection syntax calls on a LazyValue.

source
JSON.omit_empty (Method)
JSON.omit_empty(::Type{T})::Bool
JSON.omit_empty(::JSONStyle, ::Type{T})::Bool

Controls whether struct fields that are empty are included in the JSON output. Returns false by default, meaning empty fields are included. To instead exclude empty fields, set this to true. A field is considered empty if it is nothing, an empty collection (empty array, dict, string, tuple, or named tuple), or missing. This can also be controlled via the omit_empty keyword argument in JSON.json.

# Override for a specific type
JSON.omit_empty(::Type{MyStruct}) = true

# Override for a custom style
struct MyStyle <: JSON.JSONStyle end
JSON.omit_empty(::MyStyle, ::Type{T}) where {T} = true
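The keyword form can be sketched without any overload (the `Profile` type is hypothetical; expected output assumes default settings otherwise):

```julia
using JSON

struct Profile
    name::String
    tags::Vector{String}
end

# With omit_empty=true, the empty tags vector is dropped from the output
JSON.json(Profile("ana", String[]); omit_empty=true)   # "{\"name\":\"ana\"}"
JSON.json(Profile("ana", String[]))                    # "{\"name\":\"ana\",\"tags\":[]}"
```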
source
JSON.omit_null (Method)
JSON.omit_null(::Type{T})::Bool
JSON.omit_null(::JSONStyle, ::Type{T})::Bool

Controls whether struct fields that are undefined or are nothing are included in the JSON output. Returns false by default, meaning all fields are included, regardless of undef or nothing. To instead ensure only non-null fields are written, set this to true. This can also be controlled via the omit_null keyword argument in JSON.json.

# Override for a specific type
JSON.omit_null(::Type{MyStruct}) = true

# Override for a custom style
struct MyStyle <: JSON.JSONStyle end
JSON.omit_null(::MyStyle, ::Type{T}) where {T} = true
source
JSON.parse (Function)
JSON.parse(json)
JSON.parse(json, T)
JSON.parse!(json, x)
JSON.parsefile(filename)
JSON.parsefile(filename, T)
JSON.parsefile!(filename, x)

Parse a JSON input (string, vector, stream, LazyValue, etc.) into a Julia value. The parsefile variants take a filename, open the file, and pass the IOStream to parse.

Currently supported keyword arguments include:

  • allownan: allows parsing NaN, Inf, and -Inf since they are otherwise invalid JSON
  • ninf: string to use for -Inf (default: "-Infinity")
  • inf: string to use for Inf (default: "Infinity")
  • nan: string to use for NaN (default: "NaN")
  • jsonlines: treat the json input as an implicit JSON array, delimited by newlines, each element being parsed from each row/line in the input
  • dicttype: a custom AbstractDict type to use instead of JSON.Object{String, Any} as the default type for JSON object materialization
  • null: a custom value to use for JSON null values (default: nothing)
  • style: a custom StructUtils.StructStyle subtype instance to be used in calls to StructUtils.make and StructUtils.lift. This allows overriding default behaviors for non-owned types.

The methods without a type specified (JSON.parse(json), JSON.parsefile(filename)) do a generic materialization into predefined default types, including:

  • JSON object => JSON.Object{String, Any} (see note below)
  • JSON array => Vector{Any}
  • JSON string => String
  • JSON number => Int64, BigInt, Float64, or BigFloat
  • JSON true => true
  • JSON false => false
  • JSON null => nothing
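Default materialization in action (a sketch):

```julia
using JSON

x = JSON.parse("{\"a\": [1, 2.5, null, true]}")
# x is a JSON.Object{String, Any}; the array becomes a Vector{Any}
x["a"]   # Any[1, 2.5, nothing, true]
x.a      # same value via the convenient getproperty syntax
```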

When a type T is specified (JSON.parse(json, T), JSON.parsefile(filename, T)), materialization to a value of type T will be attempted utilizing machinery and interfaces provided by the StructUtils.jl package, including:

  • For JSON objects, JSON keys will be matched against field names of T with a value being constructed via T(args...)
  • If T was defined with the @noarg macro, an empty instance will be constructed, and field values set as JSON keys match field names
  • If T had default field values defined using the @defaults or @kwarg macros (from StructUtils.jl package), those will be set in the value of T unless different values are parsed from the JSON
  • If T was defined with the @nonstruct macro, the struct will be treated as a primitive type and constructed using the lift function rather than from field values
  • JSON keys that don't match field names in T will be ignored (skipped over)
  • If a field in T has a name fieldtag, the name value will be used to match JSON keys instead
  • If T or any recursive field type of T is abstract, an appropriate JSON.@choosetype T x -> ... definition should exist for "choosing" a concrete type at runtime; default type choosing exists for Union{T, Missing} and Union{T, Nothing} where the JSON value is checked if null. If the Any type is encountered, the default materialization types will be used (JSON.Object, Vector{Any}, etc.)
  • For any non-JSON-standard non-aggregate (i.e. non-object, non-array) field type of T, a JSON.lift(::Type{T}, x) = ... definition can be defined for how to "lift" the default JSON value (String, Number, Bool, nothing) to the type T; a default lift definition exists, for example, for JSON.lift(::Type{Missing}, x) = missing where the standard JSON value for null is nothing and it can be "lifted" to missing
  • For any T or recursive field type of T that is AbstractDict, non-string/symbol/integer keys will need to have a StructUtils.liftkey(::Type{T}, x) definition for how to "lift" the JSON string key to the key type of T

For any T or recursive field type of T that is JSON.JSONText, the next full raw JSON value will be preserved in the JSONText wrapper as-is.

For the unique case of nested JSON arrays and prior knowledge of the expected dimensionality, a target type T can be given as an AbstractArray{T, N} subtype. In this case, the JSON array data is materialized as an n-dimensional array, where: the number of JSON array nestings must match the Julia array dimensionality (N), nested JSON arrays at matching depths are assumed to have equal lengths, and the length of the innermost JSON array is the 1st dimension length and so on. For example, the JSON array [[[1.0,2.0]]] would be materialized as a 3-dimensional array of Float64 with sizes (2, 1, 1), when called like JSON.parse("[[[1.0,2.0]]]", Array{Float64, 3}). Note that n-dimensional Julia arrays are written to json as nested JSON arrays by default, to enable lossless re-parsing, though the dimensionality must still be provided explicitly to the call to parse (i.e. default parsing via JSON.parse(json) will result in plain nested Vector{Any}s returned).
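For example, round-tripping a matrix (the dimensionality must be supplied when parsing):

```julia
using JSON

m = [1.0 2.0; 3.0 4.0]

# Written column-major: the innermost arrays are the columns
s = JSON.json(m)                     # "[[1.0,3.0],[2.0,4.0]]"

# Parsing back requires the target dimensionality
m2 = JSON.parse(s, Matrix{Float64})
m2 == m                              # true
```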

Examples:

using Dates

abstract type AbstractMonster end

struct Dracula <: AbstractMonster
    num_victims::Int
end

struct Werewolf <: AbstractMonster
    witching_hour::DateTime
end

JSON.@choosetype AbstractMonster x -> x.monster_type[] == "vampire" ? Dracula : Werewolf

struct Percent <: Number
    value::Float64
end

JSON.lift(::Type{Percent}, x) = Percent(Float64(x))
StructUtils.liftkey(::Type{Percent}, x::String) = Percent(parse(Float64, x))

@defaults struct FrankenStruct
    id::Int = 0
    name::String = "Jim"
    address::Union{Nothing, String} = nothing
    rate::Union{Missing, Float64} = missing
    type::Symbol = :a &(json=(name="franken_type",),)
    notsure::Any = nothing
    monster::AbstractMonster = Dracula(0)
    percent::Percent = Percent(0.0)
    birthdate::Date = Date(0) &(json=(dateformat="yyyy/mm/dd",),)
    percentages::Dict{Percent, Int} = Dict{Percent, Int}()
    json_properties::JSONText = JSONText("")
    matrix::Matrix{Float64} = Matrix{Float64}(undef, 0, 0)
end

json = """
{
    "id": 1,
    "address": "123 Main St",
    "rate": null,
    "franken_type": "b",
    "notsure": {"key": "value"},
    "monster": {
        "monster_type": "vampire",
        "num_victims": 10
    },
    "percent": 0.1,
    "birthdate": "2023/10/01",
    "percentages": {
        "0.1": 1,
        "0.2": 2
    },
    "json_properties": {"key": "value"},
    "matrix": [[1.0, 2.0], [3.0, 4.0]],
    "extra_key": "extra_value"
}
"""
JSON.parse(json, FrankenStruct)
# FrankenStruct(1, "Jim", "123 Main St", missing, :b, JSON.Object{String, Any}("key" => "value"), Dracula(10), Percent(0.1), Date("2023-10-01"), Dict{Percent, Int64}(Percent(0.2) => 2, Percent(0.1) => 1), JSONText("{\"key\": \"value\"}"), [1.0 3.0; 2.0 4.0])

Let's walk through some notable features of the example above:

  • The name field isn't present in the JSON input, so the default value of "Jim" is used.
  • The address field uses a default @choosetype to determine that the JSON value is not null, so a String should be parsed for the field value.
  • The rate field has a null JSON value, so the default @choosetype recognizes it should be "lifted" to Missing, which then uses a predefined lift definition for Missing.
  • The type field is a Symbol, and has a fieldtag json=(name="franken_type",) which means the JSON key franken_type will be used to set the field value instead of the default type field name. A default lift definition for Symbol is used to convert the JSON string value to a Symbol.
  • The notsure field is of type Any, so the default object type JSON.Object{String, Any} is used to materialize the JSON value.
  • The monster field is a polymorphic type, and the JSON value has a monster_type key that determines which concrete type to use. The @choosetype macro is used to define the logic for choosing the concrete type based on the JSON input. Note that the x in @choosetype is a LazyValue, so we materialize via x.monster_type[] in order to compare with the string "vampire".
  • The percent field is a custom type Percent and the JSON.lift defines how to construct a Percent from the JSON value, which is a Float64 in this case.
  • The birthdate field uses a custom date format for parsing, specified in the JSON input.
  • The percentages field is a dictionary with keys of type Percent, which is a custom type. The liftkey function is defined to convert the JSON string keys to Percent types (parses the Float64 manually)
  • The json_properties field has a type of JSONText, which means the raw JSON will be preserved as a String of the JSONText type.
  • The matrix field is a Matrix{Float64}, so the JSON input array of arrays is materialized as such.
  • The extra_key field is not defined in the FrankenStruct type, so it is ignored and skipped over.

NOTE: Why use JSON.Object{String, Any} as the default object type? It provides several benefits:

  • Behaves as a drop-in replacement for Dict{String, Any}, so no loss of functionality
  • Performance! Its internal representation means memory savings and faster construction for small objects typical in JSON (vs Dict)
  • Insertion order is preserved, so the order of keys in the JSON input is preserved in JSON.Object
  • Convenient getproperty (i.e. obj.key) syntax is supported, even for Object{String,Any} key types (again ideal/specialized for JSON usage)

JSON.Object's internal representation uses a linked list, so key lookups are linear time (O(n)). For large JSON objects (hundreds or thousands of keys), consider using a Dict{String, Any} instead, like JSON.parse(json; dicttype=Dict{String, Any}).
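Switching the object type via dicttype can be sketched as:

```julia
using JSON

# Materialize JSON objects as Dict{String, Any} for O(1) key lookups
obj = JSON.parse("{\"a\": 1, \"b\": 2}"; dicttype=Dict{String, Any})
obj isa Dict{String, Any}   # true
obj["b"]                    # 2
```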

source
JSON.parsefile (Function)
JSON.parse(json)
JSON.parse(json, T)
JSON.parse!(json, x)
JSON.parsefile(filename)
JSON.parsefile(filename, T)
JSON.parsefile!(filename, x)

Parse a JSON input (string, vector, stream, LazyValue, etc.) into a Julia value. The parsefile variants take a filename, open the file, and pass the IOStream to parse.

Currently supported keyword arguments include:

  • allownan: allows parsing NaN, Inf, and -Inf since they are otherwise invalid JSON
  • ninf: string to use for -Inf (default: "-Infinity")
  • inf: string to use for Inf (default: "Infinity")
  • nan: string to use for NaN (default: "NaN")
  • jsonlines: treat the json input as an implicit JSON array, delimited by newlines, each element being parsed from each row/line in the input
  • dicttype: a custom AbstractDict type to use instead of JSON.Object{String, Any} as the default type for JSON object materialization
  • null: a custom value to use for JSON null values (default: nothing)
  • style: a custom StructUtils.StructStyle subtype instance to be used in calls to StructUtils.make and StructUtils.lift. This allows overriding default behaviors for non-owned types.

The methods without a type specified (JSON.parse(json), JSON.parsefile(filename)) do a generic materialization into predefined default types, including:

  • JSON object => JSON.Object{String, Any} (see note below)
  • JSON array => Vector{Any}
  • JSON string => String
  • JSON number => Int64, BigInt, Float64, or BigFloat
  • JSON true => true
  • JSON false => false
  • JSON null => nothing

When a type T is specified (JSON.parse(json, T), JSON.parsefile(filename, T)), materialization to a value of type T will be attempted utilizing machinery and interfaces provided by the StructUtils.jl package, including:

  • For JSON objects, JSON keys will be matched against field names of T with a value being constructed via T(args...)
  • If T was defined with the @noarg macro, an empty instance will be constructed, and field values set as JSON keys match field names
  • If T had default field values defined using the @defaults or @kwarg macros (from StructUtils.jl package), those will be set in the value of T unless different values are parsed from the JSON
  • If T was defined with the @nonstruct macro, the struct will be treated as a primitive type and constructed using the lift function rather than from field values
  • JSON keys that don't match field names in T will be ignored (skipped over)
  • If a field in T has a name fieldtag, the name value will be used to match JSON keys instead
  • If T or any recursive field type of T is abstract, an appropriate JSON.@choosetype T x -> ... definition should exist for "choosing" a concrete type at runtime; default type choosing exists for Union{T, Missing} and Union{T, Nothing} where the JSON value is checked if null. If the Any type is encountered, the default materialization types will be used (JSON.Object, Vector{Any}, etc.)
  • For any non-JSON-standard non-aggregate (i.e. non-object, non-array) field type of T, a JSON.lift(::Type{T}, x) = ... definition can be defined for how to "lift" the default JSON value (String, Number, Bool, nothing) to the type T; a default lift definition exists, for example, for JSON.lift(::Type{Missing}, x) = missing where the standard JSON value for null is nothing and it can be "lifted" to missing
  • For any T or recursive field type of T that is AbstractDict, non-string/symbol/integer keys will need to have a StructUtils.liftkey(::Type{T}, x) definition for how to "lift" the JSON string key to the key type of T

For any T or recursive field type of T that is JSON.JSONText, the next full raw JSON value will be preserved in the JSONText wrapper as-is.

For the unique case of nested JSON arrays and prior knowledge of the expected dimensionality, a target type T can be given as an AbstractArray{T, N} subtype. In this case, the JSON array data is materialized as an n-dimensional array, where: the number of JSON array nestings must match the Julia array dimensionality (N), nested JSON arrays at matching depths are assumed to have equal lengths, and the length of the innermost JSON array is the 1st dimension length, and so on. For example, the JSON array [[[1.0,2.0]]] would be materialized as a 3-dimensional array of Float64 with size (2, 1, 1), when called like JSON.parse("[[[1.0,2.0]]]", Array{Float64, 3}). Note that n-dimensional Julia arrays are written to JSON as nested JSON arrays by default, to enable lossless re-parsing, though the dimensionality must still be provided explicitly in the call to parse (i.e. default parsing via JSON.parse(json) will return plain nested Vector{Any}s).
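
For instance, a 2x2 matrix round-trips under these rules as follows (a sketch, assuming JSON.jl 1.x; note that each inner JSON array becomes a column):

```julia
using JSON

m = JSON.parse("[[1.0, 2.0], [3.0, 4.0]]", Matrix{Float64})
# the innermost arrays fill the 1st dimension (column-major), so:
m == [1.0 3.0; 2.0 4.0]  # true

# writing back produces the same nested-array form
JSON.json(m)  # "[[1.0,2.0],[3.0,4.0]]"
```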

Examples:

using Dates

abstract type AbstractMonster end

struct Dracula <: AbstractMonster
    num_victims::Int
end

struct Werewolf <: AbstractMonster
    witching_hour::DateTime
end

JSON.@choosetype AbstractMonster x -> x.monster_type[] == "vampire" ? Dracula : Werewolf

struct Percent <: Number
    value::Float64
end

JSON.lift(::Type{Percent}, x) = Percent(Float64(x))
StructUtils.liftkey(::Type{Percent}, x::String) = Percent(parse(Float64, x))

@defaults struct FrankenStruct
    id::Int = 0
    name::String = "Jim"
    address::Union{Nothing, String} = nothing
    rate::Union{Missing, Float64} = missing
    type::Symbol = :a &(json=(name="franken_type",),)
    notsure::Any = nothing
    monster::AbstractMonster = Dracula(0)
    percent::Percent = Percent(0.0)
    birthdate::Date = Date(0) &(json=(dateformat="yyyy/mm/dd",),)
    percentages::Dict{Percent, Int} = Dict{Percent, Int}()
    json_properties::JSONText = JSONText("")
    matrix::Matrix{Float64} = Matrix{Float64}(undef, 0, 0)
end

json = """
{
    "id": 1,
    "address": "123 Main St",
    "rate": null,
    "franken_type": "b",
    "notsure": {"key": "value"},
    "monster": {
        "monster_type": "vampire",
        "num_victims": 10
    },
    "percent": 0.1,
    "birthdate": "2023/10/01",
    "percentages": {
        "0.1": 1,
        "0.2": 2
    },
    "json_properties": {"key": "value"},
    "matrix": [[1.0, 2.0], [3.0, 4.0]],
    "extra_key": "extra_value"
}
"""
JSON.parse(json, FrankenStruct)
# FrankenStruct(1, "Jim", "123 Main St", missing, :b, JSON.Object{String, Any}("key" => "value"), Dracula(10), Percent(0.1), Date("2023-10-01"), Dict{Percent, Int64}(Percent(0.2) => 2, Percent(0.1) => 1), JSONText("{\"key\": \"value\"}"), [1.0 3.0; 2.0 4.0])

Let's walk through some notable features of the example above:

  • The name field isn't present in the JSON input, so the default value of "Jim" is used.
  • The address field uses a default @choosetype to determine that the JSON value is not null, so a String should be parsed for the field value.
  • The rate field has a null JSON value, so the default @choosetype resolves Union{Missing, Float64} to Missing, which then uses the predefined lift definition for Missing to produce missing.
  • The type field is a Symbol, and has a fieldtag json=(name="franken_type",) which means the JSON key franken_type will be used to set the field value instead of the default type field name. A default lift definition for Symbol is used to convert the JSON string value to a Symbol.
  • The notsure field is of type Any, so the default object type JSON.Object{String, Any} is used to materialize the JSON value.
  • The monster field is a polymorphic type, and the JSON value has a monster_type key that determines which concrete type to use. The @choosetype macro is used to define the logic for choosing the concrete type based on the JSON input. Note that the x in @choosetype is a LazyValue, so we materialize via x.monster_type[] in order to compare with the string "vampire".
  • The percent field is a custom type Percent and the JSON.lift defines how to construct a Percent from the JSON value, which is a Float64 in this case.
  • The birthdate field uses a custom date format for parsing, specified in the JSON input.
  • The percentages field is a dictionary with keys of the custom type Percent. The liftkey definition converts the JSON string keys to Percent values (parsing the Float64 manually).
  • The json_properties field has a type of JSONText, so the raw JSON is preserved as-is in the JSONText wrapper.
  • The matrix field is a Matrix{Float64}, so the JSON array-of-arrays input is materialized as such.
  • The extra_key field is not defined in the FrankenStruct type, so it is ignored and skipped over.

NOTE: Why use JSON.Object{String, Any} as the default object type? It provides several benefits:

  • Behaves as a drop-in replacement for Dict{String, Any}, so no loss of functionality
  • Performance! Its internal representation means memory savings and faster construction for small objects typical in JSON (vs Dict)
  • Insertion order is preserved, so the order of keys in the JSON input is preserved in JSON.Object
  • Convenient getproperty (i.e. obj.key) syntax is supported, even though the keys are Strings (again ideal/specialized for JSON usage)

JSON.Object's internal representation uses a linked list, so key lookups are linear time (O(n)). For large JSON objects (hundreds or thousands of keys), consider using a Dict{String, Any} instead, like JSON.parse(json; dicttype=Dict{String, Any}).
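
For example, opting out of JSON.Object via the dicttype keyword (a sketch, assuming JSON.jl 1.x):

```julia
using JSON

# materialize JSON objects as Dict{String, Any} for O(1) key lookups
obj = JSON.parse("""{"a": 1, "b": 2}"""; dicttype=Dict{String, Any})

obj isa Dict{String, Any}  # true
obj["a"]                   # 1
```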

source
JSON.parsefile!Function
JSON.parse(json)
JSON.parse(json, T)
JSON.parse!(json, x)
JSON.parsefile(filename)
JSON.parsefile(filename, T)
JSON.parsefile!(filename, x)

Parse a JSON input (string, vector, stream, LazyValue, etc.) into a Julia value. The parsefile variants take a filename, open the file, and pass the IOStream to parse.

Currently supported keyword arguments include:

  • allownan: allows parsing NaN, Inf, and -Inf since they are otherwise invalid JSON
  • ninf: string to use for -Inf (default: "-Infinity")
  • inf: string to use for Inf (default: "Infinity")
  • nan: string to use for NaN (default: "NaN")
  • jsonlines: treat the json input as an implicit JSON array, delimited by newlines, each element being parsed from each row/line in the input
  • dicttype: a custom AbstractDict type to use instead of JSON.Object{String, Any} as the default type for JSON object materialization
  • null: a custom value to use for JSON null values (default: nothing)
  • style: a custom StructUtils.StructStyle subtype instance to be used in calls to StructUtils.make and StructUtils.lift. This allows overriding default behaviors for non-owned types.

The default materialization types, typed parsing via the StructUtils.jl interfaces, n-dimensional array handling, and the extended FrankenStruct example are identical to those documented above for JSON.parse.

source
JSON.printFunction
JSON.json(x) -> String
JSON.json(io, x)
JSON.json(file_name, x)

Serialize x to JSON format. The 1st method takes just the object and returns a String. In the 2nd method, io is an IO object, and the JSON output will be written to it. For the 3rd method, file_name is a String, a file will be opened and the JSON output will be written to it.

All methods accept the following keyword arguments:

  • omit_null::Union{Bool, Nothing}=nothing: Controls whether struct fields that are undefined or are nothing are included in the JSON output. If true, only non-null fields are written. If false, all fields are included regardless of being undefined or nothing. If nothing, the behavior is determined by JSON.omit_null(::Type{T}), which is false by default.

  • omit_empty::Union{Bool, Nothing}=nothing: Controls whether struct fields that are empty are included in the JSON output. If true, empty fields are excluded. If false, empty fields are included. If nothing, the behavior is determined by JSON.omit_empty(::Type{T}).

  • allownan::Bool=false: If true, allow Inf, -Inf, and NaN in the output. If false, throw an error if Inf, -Inf, or NaN is encountered.

  • jsonlines::Bool=false: If true, input must be array-like and the output will be written in the JSON Lines format, where each element of the array is written on a separate line (i.e. separated by a single newline character \n). If false, the output will be written in the standard JSON format.

  • pretty::Union{Integer,Bool}=false: Controls pretty printing of the JSON output. If true, the output will be pretty-printed with 2 spaces of indentation. If an integer, it will be used as the number of spaces of indentation. If false or 0, the output will be compact. Note: Pretty printing is not supported when jsonlines=true.

  • inline_limit::Int=0: For arrays shorter than this limit, pretty printing will be disabled (indentation set to 0).

  • ninf::String="-Infinity": Custom string representation for negative infinity.

  • inf::String="Infinity": Custom string representation for positive infinity.

  • nan::String="NaN": Custom string representation for NaN.

  • float_style::Symbol=:shortest: Controls how floating-point numbers are formatted. Options are:

    • :shortest: Use the shortest representation that preserves the value
    • :fixed: Use fixed-point notation
    • :exp: Use exponential notation
  • float_precision::Int=1: Number of decimal places to use when float_style is :fixed or :exp.

  • bufsize::Int=2^22: Buffer size in bytes for IO operations. When writing to IO, the buffer will be flushed to the IO stream once it reaches this size. This helps control memory usage during large write operations. Default is 4MB (2^22 bytes). This parameter is ignored when returning a String.

  • style::JSONStyle=JSONWriteStyle(): Custom style object that controls serialization behavior. This allows customizing certain aspects of serialization, like defining a custom lower method for a non-owned type. For example, with struct MyStyle <: JSONStyle end and JSON.lower(x::Rational) = (num=x.num, den=x.den), calling JSON.json(1//3; style=MyStyle()) will output {"num": 1, "den": 3}.
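
A brief sketch of the omit_null and pretty keywords in action (the Contact type here is hypothetical; assumes JSON.jl 1.x):

```julia
using JSON

# hypothetical struct with a nullable field
struct Contact
    name::String
    email::Union{Nothing, String}
end

c = Contact("Alice", nothing)

JSON.json(c)                  # "{\"name\":\"Alice\",\"email\":null}"
JSON.json(c; omit_null=true)  # "{\"name\":\"Alice\"}"

# pretty-print with 4 spaces of indentation
JSON.json([1, 2, 3]; pretty=4)
```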

By default, x must be a JSON-serializable object. Supported types include:

  • AbstractString => JSON string: types must support the AbstractString interface, specifically with support for ncodeunits and codeunit(x, i).
  • Bool => JSON boolean: must be true or false
  • Nothing => JSON null: must be the nothing singleton value
  • Number => JSON number: Integer subtypes and Union{Float16, Float32, Float64} have default implementations; for other Number types, JSON.tostring is first called to convert the value to a String, which is then written directly to the JSON output
  • AbstractArray/Tuple/AbstractSet => JSON array: objects for which JSON.arraylike returns true are output as JSON arrays. arraylike is defined by default for AbstractArray, AbstractSet, Tuple, and Base.Generator. Other types that define arraylike must also properly implement StructUtils.applyeach to iterate over the index => element pairs. Note that arrays with dimensionality > 1 are written as nested arrays, with N nestings for N dimensions, and the 1st dimension is always the innermost nested JSON array (column-major order).
  • AbstractDict/NamedTuple/structs => JSON object: if a value doesn't fall into any of the above categories, it is output as a JSON object. StructUtils.applyeach is called, which has appropriate implementations for AbstractDict, NamedTuple, and structs, where field names => values are iterated over. Field names can be output with an alternative name via field tag overload, like field::Type &(json=(name="alternative_name",),)

If an object is not JSON-serializable, an override for JSON.lower can be defined to convert it to a JSON-serializable object. Some default lower definitions are defined in JSON itself, for example:

  • StructUtils.lower(::Missing) = nothing
  • StructUtils.lower(x::Symbol) = String(x)
  • StructUtils.lower(x::Union{Enum, AbstractChar, VersionNumber, Cstring, Cwstring, UUID, Dates.TimeType}) = string(x)
  • StructUtils.lower(x::Regex) = x.pattern

These allow common Base/stdlib types to be serialized in an expected format.
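
A minimal sketch of defining lower for one's own type (the Meters type is hypothetical; assumes JSON.jl 1.x):

```julia
using JSON

# hypothetical non-JSON-native type
struct Meters
    value::Float64
end

# lower to a plain number before serialization
JSON.lower(x::Meters) = x.value

JSON.json(Dict("height" => Meters(1.8)))  # "{\"height\":1.8}"
```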

Circular references are tracked automatically and cycles are broken by writing null for any child references.

For pre-formatted JSON data as a String, use JSONText(json) to write the string out as-is.
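
For example, a JSONText value embedded in a larger structure is written verbatim, including its original whitespace (a sketch, assuming JSON.jl 1.x):

```julia
using JSON

raw = JSON.JSONText("""{"pre": "formatted"}""")
JSON.json(Dict("payload" => raw))  # "{\"payload\":{\"pre\": \"formatted\"}}"
```

Note that no validation is performed on the wrapped string, so invalid JSON in a JSONText will produce invalid output.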

For AbstractDict objects with non-string keys, StructUtils.lowerkey will be called before serializing. This allows aggregate or other types of dict keys to be converted to an appropriate string representation. See StructUtils.liftkey for the reverse operation, which is called when parsing JSON data back into a dict type.

NOTE: JSON.json should not be overloaded directly by custom types as this isn't robust for various output options (IO, String, etc.) nor recursive situations. Types should define an appropriate JSON.lower definition instead.

NOTE: JSON.json(str, indent::Integer) is special-cased for backwards compatibility with pre-1.0 JSON.jl; under the 1.0 methods this call would mean "write the integer indent out to file str". As writing a single integer to a file is extremely rare, it was decided to keep the pre-1.0 behavior (serializing str with the given indentation) for compatibility reasons.

Examples:

using Dates

abstract type AbstractMonster end

struct Dracula <: AbstractMonster
    num_victims::Int
end

struct Werewolf <: AbstractMonster
    witching_hour::DateTime
end

struct Percent <: Number
    value::Float64
end

JSON.lower(x::Percent) = x.value
StructUtils.lowerkey(x::Percent) = string(x.value)

@noarg mutable struct FrankenStruct
    id::Int
    name::String # no default to show serialization of an undefined field
    address::Union{Nothing, String} = nothing
    rate::Union{Missing, Float64} = missing
    type::Symbol = :a &(json=(name="franken_type",),)
    notsure::Any = JSON.Object("key" => "value")
    monster::AbstractMonster = Dracula(10) &(json=(lower=x -> x isa Dracula ? (monster_type="vampire", num_victims=x.num_victims) : (monster_type="werewolf", witching_hour=x.witching_hour),),)
    percent::Percent = Percent(0.5)
    birthdate::Date = Date(2025, 1, 1) &(json=(dateformat="yyyy/mm/dd",),)
    percentages::Dict{Percent, Int} = Dict{Percent, Int}(Percent(0.0) => 0, Percent(1.0) => 1)
    json_properties::JSONText = JSONText("{\"key\": \"value\"}")
    matrix::Matrix{Float64} = [1.0 2.0; 3.0 4.0]
    extra_field::Any = nothing &(json=(ignore=true,),)
end

franken = FrankenStruct()
franken.id = 1

json = JSON.json(franken; omit_null=false)
# "{"id":1,"name":null,"address":null,"rate":null,"franken_type":"a","notsure":{"key":"value"},"monster":{"monster_type":"vampire","num_victims":10},"percent":0.5,"birthdate":"2025/01/01","percentages":{"1.0":1,"0.0":0},"json_properties":{"key": "value"},"matrix":[[1.0,3.0],[2.0,4.0]]}"

A few comments on the JSON produced in the example above:

  • The name field was #undef, and thus was serialized as null.
  • The address and rate fields were nothing and missing, respectively, and thus were serialized as null.
  • The type field has a name field tag, so the JSON key for this field is franken_type instead of type.
  • The notsure field is a JSON.Object, so it is serialized as a JSON object.
  • The monster field is an AbstractMonster, a custom abstract type. It has a lower field tag that specifies how the value of this field specifically (not all AbstractMonster values) should be serialized.
  • The percent field is a Percent, which is a custom type. It has a lower method that specifies how Percent values should be serialized.
  • The birthdate field has a dateformat field tag, so the value follows the format (yyyy/mm/dd) instead of the default ISO date format (yyyy-mm-dd).
  • The percentages field is a Dict{Percent, Int} with keys of the custom type Percent. A lowerkey method specifies how Percent keys should be serialized as strings.
  • The json_properties field is a JSONText, so the JSONText value is serialized as-is.
  • The matrix field is a Matrix{Float64}, which is serialized as a JSON array of arrays, with the first dimension as the innermost nested JSON array (column-major order).
  • The extra_field field has an ignore field tag, so it is skipped when serializing.
source
JSON.tostringMethod
JSON.tostring(x)

Overloadable function that allows non-Integer Number types to convert themselves to a String that is then used when serializing x to JSON. Note that if the result of tostring is not a valid JSON number, it will be serialized as a JSON string, with double quotes around it.

An example overload would look something like:

JSON.tostring(x::MyDecimal) = string(x)
source
JSON.@omit_emptyMacro
@omit_empty struct T ...
@omit_empty T

Convenience macro to set omit_empty(::Type{T}) to true for the struct T. Can be used in three ways:

  1. In front of a struct definition: @omit_empty struct T ... end
  2. Applied to an existing struct name: @omit_empty T
  3. Chained with other macros: @omit_empty @other_macro struct T ... end
source
JSON.@omit_nullMacro
@omit_null struct T ...
@omit_null T

Convenience macro to set omit_null(::Type{T}) to true for the struct T. Can be used in three ways:

  1. In front of a struct definition: @omit_null struct T ... end
  2. Applied to an existing struct name: @omit_null T
  3. Chained with other macros: @omit_null @defaults struct T ... end

The macro automatically handles complex macro expansions by walking the expression tree to find struct definitions, making it compatible with macros like StructUtils.@defaults.

Examples

# Method 1: Struct annotation
@omit_null struct Person
    name::String
    email::Union{Nothing, String}
end

# Method 2: Apply to existing struct
struct User
    id::Int
    profile::Union{Nothing, String}
end
@omit_null User

# Method 3: Chain with @defaults
@omit_null @defaults struct Employee
    name::String = "Anonymous"
    manager::Union{Nothing, String} = nothing
end
source