In this short tutorial, we’ll make use of the following functions for the examples:
- make_xxx() to create the different types of protobufs for attributes, nodes, graphs, etc.
- check() to verify whether a protobuf of a particular type is valid.
- print_readable() to print a human-readable representation of a proto object.

For detailed definitions of each type of ONNX protobuf, please check out the ONNX intermediate representation spec. A list of available operators (e.g. FC or Relu, used in the following examples to define the nodes) can be found in the ONNX operators documentation.
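The graph example further below also relies on make_tensor_value_info() to declare typed graph inputs and outputs. As a minimal sketch (assuming the package and its underlying ONNX Python dependency are installed), a value info protobuf can be created and inspected on its own:

library(onnx)

# Declare a named float tensor of length 1; onnx$TensorProto$FLOAT is the
# element-type constant exposed through the wrapped ONNX protobuf definitions.
x_info <- make_tensor_value_info('X', onnx$TensorProto$FLOAT, list(1L))
x_info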
Define a node protobuf and check whether it’s valid:
library(onnx)
node_def <- make_node("Relu", list("X"), list("Y"))
check(node_def)

> node_def
input: "X"
output: "Y"
op_type: "Relu"Define an attribute protobuf and check whether it’s valid:
attr_def <- make_attribute("this_is_an_int", 123L)
check(attr_def)

> attr_def
name: "this_is_an_int"
i: 123
type: INT
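The attribute type is inferred from the value passed in: an integer becomes INT, as above. Assuming the usual R-to-Python conversions apply, a double and a character string should map to FLOAT and STRING attributes respectively. A minimal, illustrative sketch (the attribute names below are made up):

# Illustrative variants; attribute types are inferred from the R values
# (double -> FLOAT, character -> STRING), assuming standard reticulate
# conversions.
attr_float  <- make_attribute("this_is_a_float", 0.5)
attr_string <- make_attribute("this_is_a_string", "hello")
check(attr_float)
check(attr_string)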
Define a graph protobuf and check whether it's valid:

graph_def <- make_graph(
  nodes = list(
    make_node("FC", list("X", "W1", "B1"), list("H1")),
    make_node("Relu", list("H1"), list("R1")),
    make_node("FC", list("R1", "W2", "B2"), list("Y"))
  ),
  name = "MLP",
  inputs = list(
    make_tensor_value_info('X' , onnx$TensorProto$FLOAT, list(1L)),
    make_tensor_value_info('W1', onnx$TensorProto$FLOAT, list(1L)),
    make_tensor_value_info('B1', onnx$TensorProto$FLOAT, list(1L)),
    make_tensor_value_info('W2', onnx$TensorProto$FLOAT, list(1L)),
    make_tensor_value_info('B2', onnx$TensorProto$FLOAT, list(1L))
  ),
  outputs = list(
    make_tensor_value_info('Y', onnx$TensorProto$FLOAT, list(1L))
  )
)
check(graph_def)

You can use print_readable() to print out the human-readable representation of the graph definition:
> print_readable(graph_def)
graph MLP (
  %X[FLOAT, 1]
  %W1[FLOAT, 1]
  %B1[FLOAT, 1]
  %W2[FLOAT, 1]
  %B2[FLOAT, 1]
) {
  %H1 = FC(%X, %W1, %B1)
  %R1 = Relu(%H1)
  %Y = FC(%R1, %W2, %B2)
  return %Y
}

Or simply print it out to see the detailed graph definition containing nodes, inputs, and outputs:
> graph_def
node {
  input: "X"
  input: "W1"
  input: "B1"
  output: "H1"
  op_type: "FC"
}
node {
  input: "H1"
  output: "R1"
  op_type: "Relu"
}
node {
  input: "R1"
  input: "W2"
  input: "B2"
  output: "Y"
  op_type: "FC"
}
name: "MLP"
input {
  name: "X"
  type {
    tensor_type {
      elem_type: FLOAT
      shape {
        dim {
          dim_value: 1
        }
      }
    }
  }
}
input {
  name: "W1"
  type {
    tensor_type {
      elem_type: FLOAT
      shape {
        dim {
          dim_value: 1
        }
      }
    }
  }
}
input {
  name: "B1"
  type {
    tensor_type {
      elem_type: FLOAT
      shape {
        dim {
          dim_value: 1
        }
      }
    }
  }
}
input {
  name: "W2"
  type {
    tensor_type {
      elem_type: FLOAT
      shape {
        dim {
          dim_value: 1
        }
      }
    }
  }
}
input {
  name: "B2"
  type {
    tensor_type {
      elem_type: FLOAT
      shape {
        dim {
          dim_value: 1
        }
      }
    }
  }
}
output {
  name: "Y"
  type {
    tensor_type {
      elem_type: FLOAT
      shape {
        dim {
          dim_value: 1
        }
      }
    }
  }
}
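check() also signals an error when a protobuf is not valid. As a hypothetical illustration (the names below are made up for this sketch), a graph whose node consumes a value that is neither a graph input nor produced by another node should fail validation:

# Hypothetical invalid graph: the Relu node reads "missing_input",
# which is never declared as a graph input or produced by another node.
bad_graph <- make_graph(
  nodes   = list(make_node("Relu", list("missing_input"), list("Y"))),
  name    = "BadGraph",
  inputs  = list(),
  outputs = list(make_tensor_value_info('Y', onnx$TensorProto$FLOAT, list(1L)))
)
# check(bad_graph)  # expected to raise a validation error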