Oracle Database
Oracle Database API for MongoDB
F44905-13
May 2024
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software, software documentation, data (as defined in the Federal Acquisition Regulation), or related
documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S.
Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed, or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software," "commercial computer software documentation," or "limited rights
data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation
of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated
software, any programs embedded, installed, or activated on delivered hardware, and modifications of such
programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and
limitations specified in the license contained in the applicable contract. The terms governing the U.S.
Government's use of Oracle cloud services are defined by the applicable contract for such services. No other
rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle®, Java, MySQL and NetSuite are registered trademarks of Oracle and/or its affiliates. Other names
may be trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface
  Audience
  Documentation Accessibility
  Related Resources
  Conventions
3.7 Indexes and Index Properties
Index
List of Examples
1-1 Creating JSON Duality View RACE_DV Using GraphQL
2-1 Indexing a Singleton Scalar Field Using the JSON Page of Database Actions
2-2 Indexing a Singleton Scalar Field Using SODA
2-3 Indexing a Singleton Scalar Field Using SQL
2-4 Creating a Multivalue Index For Fields Within Elements of an Array
2-5 Creating a Materialized View And an Index For Fields Within Elements of an Array
2-6 Migrate JSON Data to Oracle Database Using mongoexport and mongoimport
2-7 Loading JSON Data Into a Collection Using DBMS_CLOUD.COPY_COLLECTION
2-8 Using SQL Code Instead of MongoDB Aggregation Pipeline Code
2-9 Creating a Collection in One Schema and Mapping a Collection To It in Another Schema
3-1 Result for SELECT Query that Returns a Single Column of JSON Data
3-2 Result for SELECT Query that Returns Data from Multiple Columns (Any Types)
3-3 Result for a DDL Statement — No Rows Are Modified
3-4 Result for a DML Statement That Modifies One Row
3-5 Result for a DML Statement That Modifies Three Rows
3-6 Result for a DML Statement That Modifies Two Rows
List of Tables
1-1 Application-User Terms
2-1 Conversion of BSON Field _id Value To Column ID VARCHAR2 Value
2-2 JSON Scalar Type Conversions: BSON to OSON Format
3-1 Administration Commands
3-2 Aggregation Commands
3-3 Authentication Commands
3-4 Diagnostic Commands
3-5 Query and Write Operation Commands
3-6 Role Management Commands
3-7 Replication Commands
3-8 Sessions Commands
3-9 User Management Commands
3-10 Sharding Commands
3-11 Array Query Operators
3-12 Bitwise Query Operators
3-13 Comment Query Operator
3-14 Comparison Query Operators
3-15 Element Query Operators
3-16 Evaluation Query Operators
3-17 Geospatial Query Operators
3-18 Logical Query Operators
3-19 Projection Operators
3-20 Array Update Operators
3-21 Bitwise Update Operator
3-22 Field Update Operators
3-23 Modifier Update Operators
3-24 Cursor Methods
3-25 Arithmetic Expression Operators
3-26 Array Expression Operators
3-27 Boolean Expression Operators
3-28 Comparison Expression Operators
3-29 Conditional Expression Operators
3-30 Date Expression Operators
3-31 Literal Expression Operator ($literal)
3-32 Object Expression Operators
3-33 Set Expression Operators
3-34 String Expression Operators
3-35 Text Expression Operator ($meta)
3-36 Type Expression Operators
3-37 Stage Operators
3-38 Accumulator Expression Operators
3-39 Variable Expression Operator
3-40 System Variables
3-41 Miscellaneous Operators
3-42 $sql Fields
3-43 Fields of binds Object
3-44 Field datatype Values
3-45 SELECT: Mappings of Non-JSON SQL Columns to BSON
3-46 Data Types
3-47 Indexes
3-48 Index Properties
Preface
This document provides a conceptual overview of Oracle Database API for MongoDB.
• Audience
• Documentation Accessibility
• Related Resources
• Conventions
Audience
This document is intended for users of Oracle Database API for MongoDB.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at http://www.oracle.com/pls/topic/lookup?
ctx=acc&id=docacc.
Related Resources
For more information, see these Oracle resources:
• Oracle Database API for MongoDB at Oracle Help Center for complete information
about this product
• Autonomous JSON Database
• Oracle Database JSON Developer’s Guide
• Oracle as a Document Store for general information about using JSON data in
Oracle Database, including with Simple Oracle Document Access (SODA) and
Oracle Database API for MongoDB
Conventions
The following text conventions are used in this document:
Convention   Meaning
boldface     Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic       Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace    Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
1 Overview of Oracle Database API for MongoDB
Oracle Database API for MongoDB lets applications interact with collections of JSON
documents in Oracle Database using MongoDB commands.
Oracle Database API for MongoDB is provided as part of Oracle Autonomous Database
Serverless. You can enable it there using the Oracle Cloud Infrastructure Console. See
Configure Access for MongoDB in Using Oracle Autonomous Database Serverless.
If you have release 22.3 or later of Oracle REST Data Services (ORDS), then you can use
the MongoDB API with any Oracle database, release 21c or later, as well as with any Oracle
Autonomous Database, release 19c (serverless, dedicated, and cloud@customer). See
Oracle API for MongoDB Support in Oracle REST Data Services Installation and
Configuration Guide for information about enabling the API.
See Also:
Using the Oracle Database API for MongoDB in Using Oracle Autonomous
Database Serverless for information about using an Autonomous Database
(including an Autonomous JSON Database) with Oracle Database API for
MongoDB. This covers configuring the database for use with the API, including for
security and connection.
1.2 Tools and Drivers for Oracle Database API for MongoDB
Oracle Database API for MongoDB supports a variety of MongoDB tools and drivers.
Oracle recommends that you use the following tool and driver versions, or higher, with
support for load-balanced connections.
• C 1.19.0
• C# 2.13.0
• Compass 1.28.1
• Database Tools 100.5.0 (includes mongoexport, mongorestore, and mongodump)
• Go 1.6.0
• Java 4.3.0
• MongoSH 0.15.6
• Node.js driver 4.1.0
• PyMongo 3.12.0 (for Python language)
• Ruby 2.16.0
• Rust 2.1.0
You can download these drivers from https://www.mongodb.com/docs/drivers/.
Note:
Examples in this documentation of input to, and output from, Oracle Database API
for MongoDB use the syntax of shell mongosh.
Terms and Concepts: MongoDB and Oracle Database

Some application-user terms and concepts used by MongoDB are presented here, together with descriptions of their relation to Oracle Database.

Table 1-1 Application-User Terms

Term            Description
Database A set of collections.
On Oracle Database this corresponds to a database schema.
Because of this possible confusion over use of the word database, in this
documentation that word is used for Oracle Database, and the term schema, or
database schema, is used for what MongoDB calls a "database".
User For log-in purposes, a user of Oracle Database API for MongoDB is an Oracle
Database user, which is also called a database schema (see previous).
To use the collections in a given schema ("database"), you log in with the Oracle
Database API for MongoDB using the MongoDB PLAIN $external mechanism and
providing the credentials for that schema.
A root user, that is, a user who has MongoDB role root, can create additional
database schemas. And a root user can use the collections of any schema without
needing to log in separately for that schema.
Collection A collection contains a set of documents.
A collection name is unique for a given database schema: Different collections can
have the same name if they are in different schemas.
On Oracle Database, a table or a view underlies a collection. The table name is
derived from the collection name and is typically the same. (Exceptions include
collection names that use words reserved by Oracle Database.) Typically all
documents in a collection are JSON documents.
Document The basic unit of storage for data in a collection.
On Oracle Database a document corresponds roughly to a row in the table or view that
underlies the collection.
A document is typically a JSON document, that is, it contains only JSON data. On
Oracle Autonomous Database a document is always a JSON document.
On Oracle Autonomous Database the table column used to store documents is named
data.
Primary Key On Oracle Database a primary key is used to uniquely identify a table or view row.
MongoDB uses a unique _id field in a document to identify the document. On Oracle
Database the primary key for a JSON document is stored in a column named id. Its
value is automatically set to the value of the document's _id field. See Document Key:
Differences and Conversion (Oracle Database Prior to 23ai).
Query Expression    A JSON object that is sent by an application client to the server (Oracle Database), to query documents of a collection.
The object can contain query operator fields, whose names start with $. The
operators are interpreted, and their operations are invoked to act on the collection. The
server returns the action results to the client.
Query expressions are typically used to query a collection, but they can also be used
to project or update data in documents.
Oracle Database API for MongoDB translates query expressions into SQL (Structured
Query Language) queries.
Index Indexes enhance performance when acting on collections (querying, inserting,
updating, and deleting documents).
An index name is unique for a given database schema: Different indexes can have the
same name if they are in different schemas.
Note:
If Oracle Database parameter compatible is less than
23 then MongoDB commands to create or drop indexes
are ignored by Oracle Database API for MongoDB. You
must instead create Oracle Database indexes that are
relevant for your JSON data.
Pipeline MongoDB aggregation operations chain multiple operations together, invoking them
sequentially as a pipeline.
If Oracle Database parameter compatible is less than 23 then MongoDB aggregation
pipelines are not used; Oracle Database API for MongoDB carries out aggregation
operations differently. See MongoDB Aggregation Pipeline Support.
Related Topics
• MongoDB Documents and Oracle Database
Presented here is the relationship between a JSON document used by MongoDB
and the same content as a JSON document stored in, and used by, Oracle
Database.
Default Naming of a Collection Table

Note:
Oracle recommends that you do not use dollar-sign characters ($) or
number-sign characters (#) in Oracle identifier names.
For example:
• Collection names "col" and "COL" both result in a table named "COL". When used in
SQL, the table name is interpreted case-insensitively, so it need not be enclosed in
double quotation marks (").
• Collection name "myCol" results in a table named "myCol". When used in SQL, the table name is interpreted case-sensitively, so it must be enclosed in double quotation marks ("). (A short SQL sketch follows this list.)
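For example, the backing tables of these two collections could be queried from SQL as follows (a minimal sketch; the table names are the ones described above, and the queries are only illustrative):

SELECT count(*) FROM col;        -- table COL; the name is resolved case-insensitively
SELECT count(*) FROM "myCol";    -- table "myCol"; the name must be quoted exactly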
Creating JSON Duality Views for Use With the MongoDB API
You cannot create a JSON-relational duality view using the MongoDB API. Instead, use SQL
statement CREATE JSON RELATIONAL DUALITY VIEW to do that.
All duality views are compatible with the MongoDB API. They always have field _id as
their document identifier. The value of field _id specifies the document fields whose
values are the primary-key columns of the root table that underlies the duality view.
• If there is only one primary-key column, then you use that column as the value of
field _id when you define the duality view. For example: _id : race_id, as in
Example 1-1.
• If there are multiple primary-key columns, then you use an object as the value of field _id
when you define the view. The members of the object specify document fields whose
values are the primary-key columns. For example, suppose you have a car-racing duality
view with two primary-key columns, race_id and race_year, which together uniquely
identify a root-table row, but neither of which does so alone. This _id field in the duality
view definition maps document fields raceId and year to primary-key columns race_id
and race_year, respectively:
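For instance, a sketch of what that _id field could look like in the duality-view definition, using the field and column names just described (shown only as a fragment, in the GraphQL style of Example 1-1):

_id : {raceId : race_id,
       year   : race_year}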
If there is only one primary-key column, you can nevertheless use an object value for
_id, if you like. Doing so lets you provide a meaningful field name. For example, here the
single primary-key column, race_id, provides the value of field raceId as well as the
value of field _id:
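Again as a fragment-only sketch, under the same assumptions:

_id : {raceId : race_id}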
The value(s) provided by field _id for the primary key column(s) it maps to must of course be
insertable into those columns, which means that their data types must be compatible with the
column types. For example, if field _id maps to a single primary-key column that is of SQL
type NUMBER, then the _id value of a document you insert must be numeric. Otherwise, an
error is raised for the insertion attempt.
If you don't explicitly include an _id field in a document that you insert, then it is added
automatically, with an ObjectId value. (You can also explicitly use an ObjectId value in an
_id field.) An ObjectId value can only be used for a field that the duality view maps to a
column of SQL type RAW.
The definition of duality view RACE_DV (Example 1-1) is the same as the one in Creating Duality View RACE_DV Using GraphQL in JSON-Relational Duality Developer's Guide. See that documentation for similar duality-view creations for driver and race documents. The SQL code in this example embeds Oracle GraphQL code. Alternatively, you can use only SQL code for the definition, as in Creating Duality View RACE_DV, With Unnested Driver Information Using SQL.
This duality view supports JSON documents where the race objects look like this —
they contain a result field whose value is an array of objects that specify the drivers
and their resulting positions in the given race:
{"_id" : 201,
"name" : "Bahrain Grand Prix",
"laps" : 57,
"date" : "2022-03-20T00:00:00",
"podium" : {...},
"result" : [ {"driverRaceMapId" : 3,
"position" : 1,
"driverId" : 103,
"name" : "Charles Leclerc"},... ]}
The value of document identifier field _id is taken from the single primary-key column,
race_id of the root table, race. For example, the document identified by the _id field
whose value is 201 is generated from the row of data that has 201 in primary-key
column race_id of the root table (race) underlying the duality view.
Generation of the documents supported by the view automatically joins data from
columns driver_race_map_id, position and driver_id from table driver_race_map,
and column name from table driver.
The annotations (GraphQL directives) @insert, @update, and @delete specify that
applications can insert, update, and delete documents supported by the view. However,
applications can only perform update operations on the driver field of a document (a driver
cannot be inserted or deleted when you modify a race document), and they cannot update
the laps field (you cannot change the number of laps when you update a race document).
The @nocheck annotation applied to column podium specifies that updating field podium
in a race document does not contribute to checking the state/version of the document
(its ETAG value).
2 Develop Applications with Oracle Database API for MongoDB
Considerations when developing or migrating applications — a combination of (1) how-to
information and (2) descriptions of differences and possible adjustments.
• Indexing and Performance Tuning
Oracle Database offers multiple technologies to accelerate queries over JSON data,
including indexes, materialized views, in-memory column storage, and Exadata storage-
cell pushdown. Which performance-tuning approaches you take depend on the needs of
your application.
• Users, Authentication, and Authorization
Oracle Database security differs significantly from that of MongoDB. The security model
of Oracle Database API for MongoDB is described: the creation of users, their
authentication, and their authorization to perform different operations.
• Migrate Application Data from MongoDB to Oracle Database
Some ways to export your JSON data from MongoDB and then import it into Oracle
Database are described. Migration considerations are presented.
• MongoDB Aggregation Pipeline Support
Oracle Database API for MongoDB supports MongoDB aggregation pipelines, that is,
MongoDB command aggregate. It lets you use pipeline code to execute a query as a
sequence of operations. You can also use SQL as a declarative alternative to this
procedural approach.
• MongoDB Documents and Oracle Database
Presented here is the relationship between a JSON document used by MongoDB and the
same content as a JSON document stored in, and used by, Oracle Database.
• Other Differences Between MongoDB and Oracle Database
Various differences between MongoDB and Oracle Database are described. These
differences are generally not covered in other topics. Consider these differences when
you migrate an application to Oracle Database or you develop a new application for
Oracle Database that uses MongoDB commands.
• Accessing Collections Owned By Other Users (Database Schemas)
You can directly access a MongoDB API collection owned by another user (database
schema) if you log into that schema. You can indirectly access a collection owned by
another user, without logging into that schema, if that collection has been mapped to a
collection in your schema.
Indexing and Performance Tuning

If Oracle Database parameter compatible is 23 or greater, then MongoDB commands that
create or drop indexes create or drop corresponding Oracle indexes. If parameter
compatible is less than 23, then such MongoDB index operations are not supported; they are ignored.
Regardless of your database release you can create whatever Oracle Database
indexes you need directly, using (1) the JSON Page of Using Oracle Database Actions
(see Creating Indexes for JSON Collections), (2) Simple Oracle Document Access
(SODA), or (3) SQL — see Indexes for JSON Data in Oracle Database JSON
Developer’s Guide. Using the JSON page is perhaps the easiest approach to indexing
JSON data.
Note:
MongoDB allows different collections in the same "database" to have
indexes of the same name. This is not allowed in Oracle Database — the
name of an index must be unique across all collections of a given database
schema ("database").
{ "PONumber" : 1600,
"User" : "ABULL",
"LineItems" : [{ "Part" : { "Description" : "One Magic Christmas",
"UnitPrice" : 19.95,
"UPCCode" : 13131092899 },
"Quantity" : 9.0 },
{ "Part" : { "Description" : "Lethal Weapon",
"UnitPrice" : 19.95,
"UPCCode" : 85391628927
},
"Quantity" : 5.0 } ]}
Two important use cases are (1) indexing a singleton scalar field, that is, a field that
occurs only once in a document, and (2) indexing a scalar field in objects within the
elements of an array. Indexing the value of field PONumber is an example of the first
case. Indexing the value of field UPCCode is an example of the second case.
Example 2-1, Example 2-2, and Example 2-3 illustrate the first case. Example 2-5
illustrates the second case.
You can also index GeoJSON (spatial) data, using a function-based SQL index that
returns SDO_GEOMETRY data. And for all JSON data you can create a JSON search
index, and then perform full-text queries using SQL/JSON condition
json_textcontains.
Example 2-1 Indexing a Singleton Scalar Field Using the JSON Page of
Database Actions
To create an index for field PONumber using the JSON Page, do the following.
1. Right-click the collection name (orders) and select Indexes from the popup menu.
(MongoDB calls a composite index a compound index. A composite index is also sometimes called a concatenated index.)
{ "name" : "poNumIdx",
"unique" : true,
"fields" : [ { "path" : "PONumber",
"dataType" : "NUMBER",
"order" : "ASC" } ] }
The SQL code of Example 2-3 uses ERROR ON ERROR handling, to raise an error if a
document has no PONumber field or it has more than one.
Item method numberOnly() is used in the path expression that identifies the field to
index, to ensure that the field value is numeric.
Method numberOnly() is used instead of method number(), because number() allows
also for conversion of non-numeric fields to numbers. For example, number() converts
a PONumber string value of "42" to the number 42.
Other such "only" item methods, which similarly provide strict type checking, include
stringOnly(), dateTimeOnly(), and binaryOnly(), for strings, dates, and binary values,
respectively.
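A minimal SQL sketch of the kind of index-creation statement that Examples 2-2 and 2-3 describe (assuming the collection is backed by table orders with JSON column data; the index name poNumIdx and the unique, NUMBER-typed indexing of field PONumber come from the specification shown above, and the exact statement in the original example may differ):

CREATE UNIQUE INDEX poNumIdx ON orders
  (json_value(data, '$.PONumber.numberOnly()'
              RETURNING NUMBER ERROR ON ERROR));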
See Also:
SQL/JSON Path Expression Item Methods in Oracle Database JSON Developer’s
Guide
Example 2-4 Creating a Multivalue Index For Fields Within Elements of an Array
Starting with Oracle Database 21c you can create a multivalue index for the values of fields
that can occur multiple times in a document because they are contained in objects within an
array (objects as elements or at lower levels within elements).
This example creates a multivalue index on collection orders for values of field UPCCode. It
uses item method numberOnly(), so it applies only to numeric UPCCode fields.
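A sketch of such a statement follows (the index name upcCodeIdx is an illustrative assumption; the collection is assumed to be backed by table orders with JSON column data):

CREATE MULTIVALUE INDEX upcCodeIdx ON orders o
  (o.data.LineItems.Part.UPCCode.numberOnly());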
See Also:
Creating Multivalue Function-Based Indexes for JSON_EXISTS in Oracle Database
JSON Developer’s Guide
Example 2-5 Creating a Materialized View And an Index For Fields Within Elements of
an Array
Prior to Oracle Database 21c you cannot create a multivalue index for fields such as UPCCode,
which can occur multiple times in a document because they are contained in objects within
an array (objects as elements or at lower levels within elements).
You can instead, as in this example, create a materialized view that extracts the data you
want to index, and then create a function-based index on that view data.
This example creates materialized view mv_UPCCode with column upccode, which is a
projection of field UPCCode from within the Part object in array LineItems of column data of
table orders. It then creates index mv_UPCCode_idx on column upccode of the materialized
view (mv_UPCCode).
CREATE MATERIALIZED VIEW mv_UPCCode AS
  SELECT jt.upccode
    FROM orders,
         json_table(data, '$.LineItems[*]'
           ERROR ON ERROR NULL ON EMPTY
           COLUMNS (upccode NUMBER PATH '$.Part.UPCCode')) jt;
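The index on the materialized-view column can then be created like this (a sketch using the view, column, and index names given above):

CREATE INDEX mv_UPCCode_idx ON mv_UPCCode (upccode);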
The query optimizer is responsible for finding the most efficient method for a SQL
statement to access requested data. In particular, it determines whether to use an
index that applies to the queried data, and which index to use if more than one is
relevant. In most cases the best guideline is to rely on the optimizer.
In some cases, however, you might prefer to specify that a particular index be picked
up for a given query. You can do this with a MongoDB hint that names the index.
(Oracle does not support the use of MongoDB index specifications — just provide the
index name.)
For example, this query uses index poNumIdx on collection orders, created in
Example 2-1.
db.orders.find({"PONumber":1600}).hint("poNumIdx")
Alternatively, you can specify an index to use by passing an Oracle SQL hint, using
query-by-example (QBE) operator $native, which is an Oracle extension to the
MongoDB hint syntax.
The argument for $native has the same syntax as a SQL hint string (that is, the actual
hint text, without the enclosing SQL comment syntax /*+...*/). You can pass any
SQL hint using $native. In particular, you can turn on monitoring for the current SQL
statement using hint MONITOR. This code does that for a find() query:
db.orders.find().hint({"$native":"MONITOR"})
Related Topics
• MongoDB Aggregation Pipeline Support
Oracle Database API for MongoDB supports MongoDB aggregation pipelines, that
is, MongoDB command aggregate. It lets you use pipeline code to execute a
query as a sequence of operations. You can also use SQL as a declarative
alternative to this procedural approach.
Users, Authentication, and Authorization
For MongoDB, a "database" is a set of collections. For Oracle Database API for MongoDB,
this corresponds to an Oracle Database schema.
Note:
Using Oracle API for MongoDB to drop a "database" does not drop the
underlying database schema. Instead, it drops all collections within the
schema.
An administrative user can drop a schema using SQL (for example, using
Database Actions with an Autonomous Oracle Database).
For the API, a username must be a database schema name. The name is case-
insensitive, it cannot start with a nonalphabetic character (including a numeral), and it
must be provided with a secure password.
Normally, a user of the API can only perform operations within its schema (the
username is the schema name). Examples of such operations include creating new
collections, reading and writing documents, and creating indexes.
When an administrative user tries to insert data into a database schema (user) that
does not exist, that schema is created automatically as a schema-only account, which
means that it does not have a password and it cannot be logged into. The new
schema is granted these privileges: SODA_APP, CREATE SESSION, CREATE TABLE, CREATE
VIEW, CREATE SEQUENCE, CREATE PROCEDURE, and CREATE JOB. The schema is also
given an unlimited tablespace quota, and is enabled for using Oracle REST Data
Services (ORDS).
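For instance, in this mongosh sketch an administrative user switches to a schema that does not yet exist and inserts a document, which creates the schema automatically (the schema name scott and the collection and document are illustrative assumptions):

use scott
db.newcoll.insertOne({"greeting" : "hello"})  // creates schema SCOTT and collection newcoll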
For an ordinary user of the API, a MongoDB shell command (such as use
<database>) that switches from the current MongoDB database to another one is
typically not supported — switching to another database schema raises an error.
However, an administrative user, that is, one that has all of the privileges CREATE USER,
ALTER USER, and DROP USER, can create new users (database schemas) and can access
any schema as any user. User admin is a predefined administrative user.
Related Topics
• Terms and Concepts: MongoDB and Oracle Database
Some application-user terms and concepts used by MongoDB are presented,
together with descriptions of their relation to Oracle Database.
Migrate Application Data from MongoDB to Oracle Database
This displays a popup dialog box where you browse to and import the JSON file
containing your collection data. See MongoDB Compass.
• After exporting JSON data to your file system, import it to the Oracle Cloud Object
Store, then load it from there into a collection using PL/SQL procedure
DBMS_CLOUD.copy_collection. Example 2-7 illustrates this.
This processes the data in parallel, so it is typically faster than mongoimport.
• Write a program that reads JSON documents from a connection to MongoDB and
writes them to a connection to Oracle Database.
Example 2-6 Migrate JSON Data to Oracle Database Using mongoexport and mongoimport
This example exports collection sales from MongoDB to file-system file sales.json. It
then imports the data from that file to Oracle Database as collection sales. The user is
connected to host <host> as database schema <user> with password <password>.
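The export step might look roughly like this (a hedged sketch; the MongoDB deployment details, here a local mongod on the default port and a database named mydb, are assumptions):

mongoexport --uri='mongodb://localhost:27017/mydb' --collection=sales --out=sales.json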
mongoimport 'mongodb://<user>:<password>@<host>:27017/<user>?authMechanism=PLAIN&authSource=$external&ssl=true' --collection=sales --file=sales.json
Note:
Use URI percent-encoding to replace any reserved characters in your connection-
string URI — in particular, characters in your username and password. These are
the reserved characters and their percent encodings:
!  %21    #  %23    $  %24    %  %25    &  %26
'  %27    (  %28    )  %29    *  %2A    +  %2B
,  %2C    /  %2F    :  %3A    ;  %3B    =  %3D
?  %3F    @  %40    [  %5B    ]  %5D
For example, if your username is RUTH and your password is @least1/2#? then your
MongoDB connection string to server <server> might look like this:
'mongodb://RUTH:%40least1%2F2%23%3F@<server>:27017/ruth/ ...'
Depending on the tools or drivers you use, you might be able to provide a
username and password as separate parameters, instead of as part of a URI
connection string. In that case you likely won't need to encode any reserved
characters they contain.
See also:
• Percent Encoding - Reserved Characters
• Uniform Resource Identifier (URI): Generic Syntax
See Also:
Using the Oracle Database API for MongoDB in Using Oracle Autonomous
Database Serverless for information about using an Autonomous Database
(including an Autonomous JSON Database) with Oracle Database API for
MongoDB. This covers configuring the database for use with the API, including for
security and connection.
The value passed as copy_collection parameter FORMAT is a JSON object with fields
recorddelimiter and type:
• Field recorddelimiter specifies that records in the input data are separated by
newline characters. A JSON document is created for each record, that is, for each
line in the newline-delimited input data.
• Field type specifies that the input JSON data can contain EJSON extended
objects, and that these should be interpreted.
See DBMS_CLOUD Package Format Options in Using Oracle Autonomous Database
Serverless for information about parameter FORMAT.
BEGIN
DBMS_CLOUD.copy_collection(
collection_name => 'newCollection',
file_uri_list => 'https://objectstorage.../data.json',
format => json_object(
'recorddelimiter' : '''\n''',
'type' : 'ejson'));
END;
/
Related Topics
• Users, Authentication, and Authorization
Oracle Database security differs significantly from that of MongoDB. The security
model of Oracle Database API for MongoDB is described: the creation of users,
their authentication, and their authorization to perform different operations.
• Terms and Concepts: MongoDB and Oracle Database
Some application-user terms and concepts used by MongoDB are presented,
together with descriptions of their relation to Oracle Database.
MongoDB Aggregation Pipeline Support

With SQL you describe the result you want, and the query optimizer chooses an execution
plan based on available indexes, data statistics, cost estimate, and so on. In other words, you specify what you want done, and the optimizer,
not you, determines how it should be done.
not you, determines how it should be done.
Oracle Database SQL support of JSON data includes operating on documents and
collections, as well as joining JSON and non-JSON data (relational, spatial, graph, …). As a
user of Oracle Database API for MongoDB you can apply SQL directly to JSON data without
worrying about manually specifying and sequencing any specific operations.
But if you do use MongoDB aggregation pipeline code then the MongoDB API automatically
translates the pipeline stages and operations into equivalent SQL code, and the optimizer
picks the best execution plan possible. The API supports a subset of the MongoDB
aggregation pipeline stages and operations — see Aggregation Pipeline Operators for
details.
Unlike MongoDB, Oracle Database does not limit the size of the data to be sorted, joined, or
grouped. You can use it for reporting or analytical work that spans millions of documents
across any number of collections.
You can use Oracle Database simplified dot notation for JSON data, or standard SQL/JSON
functions json_value, json_query, and json_table, to extract values from your JSON data
for reporting or analytic purposes. You can convert relational and other kinds of data
(including spatial and graph data) to JSON data using the SQL/JSON generation functions.
You can join JSON data from multiple tables and collections with a single SQL FROM clause.
A MongoDB aggregation pipeline performs operations on JSON documents from one or more
collections. It's composed of successive stages, each of which performs document operations
and passes the resulting documents to the next stage for further processing. The operations
for any stage can filter the documents passed from the previous stage, transform (update)
them, or even create new documents, for the next stage. Transformation can involve the use
of aggregate operators, also called accumulators, such as $avg (average), which can
combine field values from multiple documents.
Each stage in a pipeline is represented by an aggregation expression, which is a JSON
value. See the MongoDB Aggregation Pipeline documentation for more background.
You can use declarative SQL code to accomplish what you would otherwise use an
aggregation pipeline for. This is particularly relevant if your Oracle Database parameter
compatible is less than 23, in which case most MongoDB aggregation pipelines are not
supported. Example 2-8 illustrates this.
Example 2-8 Using SQL Code Instead of MongoDB Aggregation Pipeline Code
This example calculates average revenues by zip code. It first shows a MongoDB
aggregation pipeline expression to do this; then it shows equivalent SQL code.
MongoDB aggregation pipeline:
This code tells MongoDB how to calculate the result; it specifies the order of execution.
db.sales.aggregate(
[{"$group" : {"_id" : "$address.zip",
"avgRev" : {"$avg" : "$revenue"}}},
{"$sort" : {"avgRev" : -1}}])
SQL:
This code specifies the grouping and order of the output presentation declaratively. It does
not specify how the computation is to be carried out, including the order of execution. It
simply says that the results are to be grouped by zipcode and presented in descending
order of the average revenue figures. The query returns rows of two columns with
scalar values for zipcode (a string) and average revenue (a number).
SELECT s.data.address.zip.string(),
avg(s.data.revenue.number())
FROM sales s
GROUP BY s.data.address.zip.string()
ORDER BY 2 DESC;
The following query is similar, but it provides the result as rows of JSON objects, each
with a string field zip, for the zipcode, and a numeric field avgRev, for the average
revenue. SQL/JSON generation function json_object constructs JSON objects from
the results of evaluating its argument SQL expressions.
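A sketch of that query, using the same sales table and the field names zip and avgRev just described:

SELECT json_object('zip'    : s.data.address.zip.string(),
                   'avgRev' : avg(s.data.revenue.number()))
  FROM sales s
  GROUP BY s.data.address.zip.string()
  ORDER BY avg(s.data.revenue.number()) DESC;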
Related Topics
• Aggregation Pipeline Operators
Support of MongoDB aggregation pipeline operators is described.
MongoDB Documents and Oracle Database

Note:
This topic applies to JSON documents that you migrate from MongoDB and
store in Oracle Database. It does not apply to JSON documents that are
generated/supported by JSON-relational duality views. For information about
MongoDB-compatible duality views see Using the Mongo DB API with
JSON-Relational Duality Views.
You can migrate an existing application and its data from MongoDB to Oracle
Database, or you can develop new applications on Oracle Database, which use the
same or similar data as applications on MongoDB. JSON data in both cases is stored
in documents.
It's helpful to have a general understanding of the differences between the documents
used by MongoDB and those used by Oracle Database. In particular, it helps to
understand what happens to a MongoDB document that you import, to make it usable
with Oracle Database.
Some of the information here presents details that you can ignore if you read this topic just to
get a high-level view. But it's good to be aware of what's involved; you may want to revisit this
at some point.
When you import a collection of MongoDB documents, the key and the content of each
document are converted to forms appropriate for Oracle Database.
A MongoDB document has a native binary JSON format called BSON. An Oracle Database
document has a native binary JSON format called OSON. So one change that's made to your
MongoDB document is to translate its binary format from BSON to OSON. This translation
applies to both the key and the content of a document.
Note:
For Oracle Database API for MongoDB, as for MongoDB itself, a stage receives
input, and produces output, in the form of BSON data, that is, binary JSON data in
the MongoDB format.
When you import a collection into Oracle Database prior to 23ai, Oracle Database API for
MongoDB creates id column values from the values of field _id in your MongoDB
documents. MongoDB field _id can have values of several different data types. The Oracle
Database id column that corresponds to that field is always of SQL data type VARCHAR2
(character data; in other words, a string).
The _id field in your imported documents is untouched during import or thereafter. Oracle
Database doesn't use it — it uses column id instead. But it also doesn't change it, so any use
your application might make of that field is still valid. Field _id in your documents is never
changed; even applications cannot change (delete or update) it.
If you need to work with your documents using SQL or Simplified Oracle Document Access
(SODA) then you can directly use column id. You can easily use that primary-key column to
join JSON data with other database data, for instance. The documents that result from
importing from MongoDB are SODA documents (with native binary OSON data).
Be aware of these considerations that result from the separation of document key from
document:
• Though all documents imported from MongoDB will continue to have their _id fields, for
Oracle Database prior to 23ai the documents in a JSON collection need not have an _id
field. And because, for Oracle Database prior to 23ai, a document and its key are
separate, a document other than one imported from MongoDB could have an _id field
that has no relation whatsoever with the document key.
• Because MongoDB allows _id values of different types, and these are all converted to
string values (VARCHAR2), if for some reason your collection has documents with _id
values "123" (JSON string) and 123 (JSON number) then importing the collection
will raise a duplicate-key error, because those values would each be translated as
the same string value for column id.
BSON values of field _id are converted to VARCHAR2 column id values according to
Table 2-1. If an _id field value is any type not listed in the table then it is replaced by a
generated ObjectId value, which is then converted to the id column value.
Table 2-1 Conversion of BSON Field _id Value To Column ID VARCHAR2 Value
• If the input number has no fractional part (it is integral), and if it can be rendered in
40 digits or less, then it is rendered as an integer. If necessary, trailing zeros are
used, to avoid notation with an exponent. For example, 1000000000 is used
instead of 1E+9.
• If the input number has a fractional part, the number is rendered in 40 digits or less
with a decimal point separator. If necessary, zeros are used to avoid notation with
an exponent. For example, 0.00001 is used instead of 1E-5.
• If conversion of the input number would result in a loss of digit precision in the 40-
digit format, the number is instead rendered with an exponent. This can happen for
a number whose absolute value is extremely small or extremely large, even if the
number is integral. For example, 1E100 is used, to avoid a 1 followed by 100 zeros.
In practice, this canonical numeric format means that in most cases the numeric _id
field value results in an obvious, or "pretty" VARCHAR2 value for column id. A format
that uses an exponent is used only when necessary, which generally means
infrequently.
Table 2-2 specifies the type mappings that are applied when converting scalar BSON data to
scalar OSON data. The OSON scalar types used are SQL data types, except as noted. Any
BSON types not listed are not converted; instead, an error is raised when they are
encountered. This includes BSON types regex and JavaScript.
Related Topics
• Other Differences Between MongoDB and Oracle Database
Various differences between MongoDB and Oracle Database are described. These
differences are generally not covered in other topics. Consider these differences when
you migrate an application to Oracle Database or you develop a new application for
Oracle Database that uses MongoDB commands.
• Users, Authentication, and Authorization
Oracle Database security differs significantly from that of MongoDB. The security model
of Oracle Database API for MongoDB is described: the creation of users, their
authentication, and their authorization to perform different operations.
Other Differences Between MongoDB and Oracle Database
• MongoDB allows different collections in the same "database" to have indexes of the
same name. This is not allowed in Oracle Database — the name of an index must be
unique across all collections of a given database schema ("database").
• The maximum size of a document for MongoDB is 16 MB. The maximum size for Oracle
Database (and thus for the MongoDB API) is 32 MB.
Related Topics
• MongoDB Documents and Oracle Database
Presented here is the relationship between a JSON document used by MongoDB and the
same content as a JSON document stored in, and used by, Oracle Database.
• Users, Authentication, and Authorization
Oracle Database security differs significantly from that of MongoDB. The security model
of Oracle Database API for MongoDB is described: the creation of users, their
authentication, and their authorization to perform different operations.
See Also:
Unicode Collation Algorithm, Unicode® Technical Standard #10
Accessing Collections Owned By Other Users (Database Schemas)

In the scenario shown here, user (schema) john creates a collection, john_coll, and grants
user janet access to it and to its backing table.

User janet then maps a new collection, janet_coll (in schema janet), to collection
john_coll in schema john. (The original and mapped collections need not have
different names, such as john_coll and janet_coll; they could both have the same
name.)
User janet then lists the collections available to schema janet, and reads the content
of mapped collection janet_coll, which is the same as the content of collection
john_coll.
(The commands submitted to mongosh are each a single line (string), but they are
shown here continued across multiple lines for clarity.)
Note:
Examples in this documentation of input to, and output from, Oracle
Database API for MongoDB use the syntax of shell mongosh.
1. When connected to the database as user john, run PL/SQL code to create
collection john_coll backed by table john_coll. The second argument to
create_collection is the metadata needed for a MongoDB-compatible collection.
(The backing table name is derived from the collection name — see Default Naming of
a Collection Table.)
DECLARE
col SODA_COLLECTION_T;
BEGIN
col := DBMS_SODA.create_collection(
'john_coll',
'{"contentColumn" : {"name" : "DATA",
"sqlType" : "BLOB",
"jsonFormat" : "OSON"},
"keyColumn" : {"name" : "ID",
"assignmentMethod" : "EMBEDDED_OID",
"sqlType" : "VARCHAR2"},
"versionColumn" : {"name" : "VERSION", "method" : "UUID"},
"lastModifiedColumn" : {"name" : "LAST_MODIFIED"},
"creationTimeColumn" : {"name" : "CREATED_ON"}}');
END;
2. Connect to the database using shell mongosh as user john, list the collections in that
schema (John's collections), insert a document into collection john_coll, and show
the result of the insertion.
mongosh 'mongodb://john:...@MQSSYOWMQVGAC1Y-CTEST.adb.us-ashburn-1.oraclecloudapps.com:27017/john?authMechanism=PLAIN&authSource=$external&ssl=true&retryWrites=false&loadBalanced=true'
Output:
john_coll
3. In schema john, grant user janet access privileges to collection john_coll and its backing
table of the same name, john_coll.
4. When connected to the database as user (schema) janet, create a new collection
janet_coll in schema janet that's mapped to collection john_coll in schema john.
DECLARE
col SODA_COLLECTION_T;
BEGIN
col := DBMS_SODA.create_collection(
'janet_coll',
'{"schemaName" : "JOHN",
"tableName" : "JOHN_COLL",
"contentColumn" : {"name" : "DATA",
"sqlType" : "BLOB",
"jsonFormat" : "OSON"},
"keyColumn" : {"name" : "ID",
"assignmentMethod" : "EMBEDDED_OID",
"sqlType" : "VARCHAR2"},
"versionColumn" : {"name" : "VERSION", "method" : "UUID"},
"lastModifiedColumn" : {"name" : "LAST_MODIFIED"},
"creationTimeColumn" : {"name" : "CREATED_ON"}}',
DBMS_SODA.CREATE_MODE_MAP);
END;
Note:
The schema and table names used in the collection metadata argument
must be as they appear in the data dictionary, which in this case means they
must be uppercase. You can use these queries to obtain the correct schema
and table names for collection <collection> (when connected as the owner
of <collection>):
5. Connect to the database using shell mongosh as user janet, list the available
collections, and show the content of collection janet_coll (which is the same as the
content of John's collection john_coll).
mongosh 'mongodb://janet:...@MQSSYOWMQVGAC1Y-CTEST.adb.us-ashburn-1.oraclecloudapps.com:27017/janet?authMechanism=PLAIN&authSource=$external&ssl=true&retryWrites=false&loadBalanced=true'
janet_coll
janet> db.janet_coll.find()
3 Support for MongoDB APIs, Operations, and Data Types — Reference
MongoDB APIs, operations, and data types supported by Oracle Database are listed,
together with information about their support.
Unsupported MongoDB constructs raise an error. A construct that is ignored is listed in this
documentation as a no-op (it does not raise an error). A construct can be ignored because it
makes no sense or is not needed on Oracle architecture.
Note:
Only server commands are covered, not client-side wrapper functions. Client-side
wrapper functions such as deleteMany() and updateMany() use server commands
delete and update internally.
• Database Commands
Support of MongoDB database commands is described. This includes commands for
administration, aggregation, authentication, diagnostic, query and write operations, role
management, replication, sessions, user management, and sharding.
• Query and Projection Operators
Support of MongoDB query and projection operators is described. This includes array,
bitwise, comment, comparison, element, evaluation, geospatial, and logical query
operators, as well as projection operators.
• Update Operators
Support of MongoDB update operators is described. This includes array, bitwise, field,
and modifier update operators.
• Cursor Methods
Support of MongoDB cursor methods is described.
• Aggregation Pipeline Operators
Support of MongoDB aggregation pipeline operators is described.
• Data Types
Support of MongoDB data types is described.
• Indexes and Index Properties
Support of MongoDB indexes and index properties is described.
See Also:
Database Commands in the MongoDB Reference manual
Note:
Besides creating a collection with explicit use of command create, a collection is
automatically created upon its first insertion of a document. That is, to create a
collection it is sufficient to refer to it by name when inserting a document into it.
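For instance, in this mongosh sketch the collection name newColl is an illustrative assumption; the insertion creates the collection if it does not already exist:

db.newColl.insertOne({"greeting" : "hello"})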
See Also:
Administration Commands in the MongoDB Reference manual
See Also:
Aggregation Commands in the MongoDB Reference manual
See Also:
Authentication Commands in the MongoDB Reference manual
See Also:
Diagnostic Commands in the MongoDB Reference manual
Note:
Support for command find.
find().sort({"name":1, "age":1,
"birthday":-1}).hint({"$type":{"age":"number",
"birthday":"dateTime"}})
Note:
Supported query operators for commands delete, find, findAndModify, and
update.
• Comparison and
logical: $eq, $gt, $gte, $in, $lt, $lte, $ne, $nin, $and, $not, $nor, and $or.
• Element and evaluation: $type, $regex, and $text.
• Geospatial: $geoIntersects, $geoWithin, $near, $nearSphere.
• Array: $all, $elemMatch.
See Also:
Query and Write Operation Commands in the MongoDB Reference manual
See Also:
Role Management Commands in the MongoDB Reference manual
See Also:
Replication Commands in the MongoDB Reference manual
See Also:
Sessions Commands in the MongoDB Reference manual
See Also:
User Management Commands in the MongoDB Reference manual
See Also:
Sharding Commands in the MongoDB Reference manual
See Also:
Query and Projection Operators in the MongoDB Reference manual
See Also:
Array Query Operators in the MongoDB Reference manual
See Also:
Bitwise Query Operators in the MongoDB Reference manual
See Also:
$comment in the MongoDB Reference manual
See Also:
Comparison Query Operators in the MongoDB Reference manual
See Also:
Element Query Operators in the MongoDB Reference manual
See Also:
Evaluation Query Operators in the MongoDB Reference manual
See Also:
Logical Query Operators in the MongoDB Reference manual
See Also:
Projection Operators in the MongoDB Reference manual
See Also:
Update Array
See Also:
Update Bitwise in the MongoDB Reference manual
See Also:
Update Field in the MongoDB Reference manual
See Also:
Update Operators in the MongoDB Reference manual
See Also:
Cursor Methods in the MongoDB Reference manual
See Also:
Aggregation Pipeline Operators in the MongoDB Reference manual
See Also:
Arithmetic Expression Operators in the MongoDB Reference manual
See Also:
Array Expression Operators in the MongoDB Reference manual
See Also:
Boolean Expression Operators in the MongoDB Reference manual
See Also:
Comparison Expression Operators in the MongoDB Reference manual
See Also:
Conditional Expression Operators in the MongoDB Reference manual
See Also:
Date Expression Operators in the MongoDB Reference manual
See Also:
Literal Expression Operator in the MongoDB Reference manual
See Also:
Object Expression Operators in the MongoDB Reference manual
See Also:
Set Expression Operators in the MongoDB Reference manual
See Also:
String Expression Operators in the MongoDB Reference manual
See Also:
Text Expression Operator in the MongoDB Reference manual
See Also:
Type Expression Operators in the MongoDB Reference manual
See Also:
Aggregation Pipeline Stages in the MongoDB Reference manual
See Also:
Accumulators ($group) and Accumulators ($project) in the MongoDB Reference manual
See Also:
Variable Expression Operators in the MongoDB Reference manual
See Also:
Variables in Aggregation Expressions in the MongoDB Reference manual
• LOW — Low-priority service for reporting and batch processing. Operations are not
run in parallel.
• MEDIUM — Medium-priority service for reporting and batch operations. All
operations run in parallel and are subject to queuing.
• HIGH — High-priority service for reporting and batch operations. All operations run
in parallel and are subject to queuing.
• TP — Typical service for transaction processing. Operations are not run in parallel.
• TPURGENT — Highest-priority service, for time-critical transaction processing.
Supports manual parallelism.
For example, the hint here specifies that operator $count should use service HIGH.
Related Topics
• MongoDB Aggregation Pipeline Support
Oracle Database API for MongoDB supports MongoDB aggregation pipelines, that is,
MongoDB command aggregate. It lets you use pipeline code to execute a query as a
sequence of operations. You can also use SQL as a declarative alternative to this
procedural approach.
Here is an example that uses shell mongosh to execute, as user user100, an aggregation
pipeline with a simple $sql stage from a MongoDB client.
insertMany is used to create a collection called emps and to insert three employee documents
into it. (In Oracle Database the collection is table emps with a single JSON-type column, data.)
user100> db.emps.insertMany([
{"ename" : "SMITH", "job" : "CLERK", "sal" : 800},
{"ename" : "ALLEN", "job" : "SALESMAN", "sal" : 1600},
{"ename" : "WARD", "job" : "SALESMAN", "sal" : 1250}
]);
{
acknowledged: true,
insertedIds: {
'0': ObjectId("6595eb06e0fc41db6de93a6d"),
'1': ObjectId("6595eb06e0fc41db6de93a6e"),
'2': ObjectId("6595eb06e0fc41db6de93a6f")
}
}
A SQL SELECT query is used to compute the average of the employee salaries for each job.
The average is computed using SQL function AVG.
The query returns two JSON objects with fields JOB and AVERAGE.
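A hedged sketch of such a $sql stage (the dot-notation paths and the aliases job and average are assumptions based on the documents inserted above and on the output shown below):

user100> db.aggregate([ {$sql : {statement :
           `SELECT e.data.job.string()      AS job,
                   AVG(e.data.sal.number()) AS average
              FROM emps e
              GROUP BY e.data.job.string()`}} ]);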
[
  { JOB: 'CLERK', AVERAGE: 800 },
  { JOB: 'SALESMAN', AVERAGE: 1425 }
]
A $sql stage has the following syntax. The fields other than $sql are described in
Table 3-42.
• If $sql is the only stage in the pipeline and the pipeline has no starting collection,
then <SQL statement> can be any Oracle SQL or PL/SQL code, including SQL
data definition language (DDL) and data manipulation language (DML) code.
For example, this code uses a SQL UPDATE statement to increase the salaries of
all employees by 10 percent:
db.aggregate([ {$sql :
  {statement :
     "UPDATE employees SET salary = salary * 1.1"}} ]);
`SELECT json_mergepatch(
v.data,
JSON {'updated' : SYSTIMESTAMP})
FROM input v`},
{$out : "closed_orders"} ]);
db.closed_orders.findOne()
{
_id: ObjectId('65e8b973ca4d0a3a255794c8'),
order_id: 12382,
product: 'Autonomous Database',
status: 'closed',
updated: ISODate('2024-03-06T18:44:23.275Z')
}
The following kinds of SQL statement are not supported in a $sql stage:
• Statements that use OUT parameters or invoke stored procedures directly (see
Subprogram Parameter Modes and SQL Statements for Stored PL/SQL Units)
• Data Manipulation Language (DML) statements that use a returning clause and return
variables (see DML Returning)
All stages return zero or more JSON objects as their result. The result for a $sql stage
depends on whether or not the SQL statement executed is a SELECT statement.
• For a SELECT statement, each row in the query result set is mapped to a JSON object in
the $sql stage result. See $sql Stage Result for a SELECT Statement.
• For a non-SELECT statement, the $sql stage result is a JSON object with the single field
result, whose value indicates the number of table rows that the statement changed.
See $sql Stage Result for a Non-SELECT Statement.
binds Field
The optional binds field in a $sql stage specifies one or more sets of SQL variable
bindings (placeholder expressions). Each binding specifies a variable used in the SQL
statement and the value to replace it with. When multiple binding sets are specified,
the statement is executed once for each set.
There are three ways to specify a single set of bindings:
• Specify a set of bindings as an object, each of whose members has a variable's
name as its field name and the variable's value as field value.
For example, here variable empno is bound to value "E123", and variable ename is
bound to value "Abdul J.".
db.aggregate([ {$sql :
{statement :
`INSERT INTO emp(empno, ename)
VALUES(:empno, :ename)`,
binds : {"empno" : "E123",
"ename" : "Abdul J."}}} ]);
• Specify a set of bindings as an array of objects, each of which has fields name and
value (and optionally index and dataType), giving a variable's name and the value to
bind to it.
For example, this specifies the same bindings as the previous example:
db.aggregate([ {$sql :
  {statement :
     `INSERT INTO emp(empno, ename)
      VALUES (:empno, :ename)`,
   binds : [ {name : "empno",
              value : "E123"},
             {name : "ename",
              value : "Abdul J."} ] }} ]);
• Specify a set of bindings as an array of variable values, whose positions in the array
correspond to the positions of the placeholder variables in the SQL statement.
For example, this also specifies the same bindings as the previous examples:
db.aggregate([ {$sql :
  {statement :
     `INSERT INTO emp(empno, ename)
      VALUES (:empno, :ename)`,
   binds : [ "E123", "Abdul J." ] }} ]);
To specify multiple sets of bindings, use an array whose elements each specify a single set
of bindings. Each array element can specify its binding set in any of the ways described
above: (1) an object whose members are variable name–value pairs, (2) an array of objects
with optional fields index, name, value, and dataType, or (3) an array of variable values
whose positions correspond to the positions of the placeholder variables in the statement.
The following three examples illustrate this. They are semantically equivalent. The INSERT
statement of each example is executed three times:
• Once for the first set of bindings: variable :empno as "E123", and variable :ename as
"Abdul J."
• Once for the second set of bindings: variable :empno as "E456" and variable :ename as
"Elena H."
• Once for the third set of bindings: variable :empno as "E789" and variable :ename as
"Francis K."
In the first example, the array elements are objects, each of which specifies a set of bindings.
Each member of an object specifies the value of an individual (named) binding.
db.aggregate([ {$sql :
{statement :
`INSERT INTO emp(empno, ename)
VALUES (:empno, :ename)`,
binds :
[ {"empno" : "E123", "ename" : "Abdul J."},
{"empno" : "E456", "ename" : "Elena H."},
{"empno" : "E789", "ename" : "Francis K."} ]}} ]);
In the second example, the array elements are themselves arrays, each of which specifies a
set of variable bindings. In this case each element of the inner arrays is an object with fields
name and value, specifying the value of an individual (named) binding.
db.aggregate([ {$sql :
  {statement :
     `INSERT INTO emp(empno, ename)
      VALUES (:empno, :ename)`,
   binds : [ [ {name : "empno", value : "E123"},
               {name : "ename", value : "Abdul J."} ],
             [ {name : "empno", value : "E456"},
               {name : "ename", value : "Elena H."} ],
             [ {name : "empno", value : "E789"},
               {name : "ename", value : "Francis K."} ] ]}} ]);
In the third example, the array elements are themselves arrays, each of which
specifies a set of variable bindings. Each element of the inner arrays specifies the
value of an individual (positional) binding.
db.aggregate([ {$sql :
{statement :
`INSERT INTO emp(empno, ename)
VALUES (:empno, :ename)`,
binds : [ [ "E123", "Abdul J." ],
[ "E456", "Elena H." ],
[ "E789", "Francis K." ] ]}} ]);
BSON types not listed are not supported; their use raises an error.
Starting with Oracle Database 23ai, JSON type is supported as a dataType value, for each of
the supported BSON types. Prior to release 23ai, an error is raised if field dataType has
value JSON.
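For example, a bind value might be passed as JSON data along these lines (a minimal
sketch; the table orders_log and its JSON column payload are hypothetical, and it assumes
that field dataType accepts the value "JSON"):
db.aggregate([ {$sql :
  {statement : `INSERT INTO orders_log(payload) VALUES (:payload)`,
   binds : [ {name     : "payload",
              value    : {order_id : 12382, status : "closed"},  // bound as JSON data
              dataType : "JSON"} ]}} ]);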
Example 3-1 Result for SELECT Query that Returns a Single Column of JSON
Data
This example shows two queries that select columns from table dept and return a
single column of JSON data. They both use SQL construction JSON{…} to produce a
JSON-type object.
This first query uses a wildcard (*) to select all columns from table dept. The column
names are used as the resulting object field names.
Query:
Result:
This second query selects columns deptno and dname from table dept. It uses
JSON{…} to produce a JSON-type object whose fields _id and name have the
corresponding column values as their values.
Query:
Result:
2 On Oracle Database 19c use this query instead: SELECT json_object(*) data FROM dept;
3 On Oracle Database 19c use this query instead: SELECT json_object('_id' : deptno, 'name' : dname)
data FROM dept;
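On Oracle Database 23ai, the two queries might look like the following sketch, which uses
the JSON{…} constructor (the exact SQL of the original queries is an assumption; column alias
data matches the 19c variants in the footnotes):
// First query: all columns of dept become fields of the object.
db.aggregate([ {$sql :
  {statement : `SELECT JSON {*} data FROM dept`}} ]);
// Second query: columns deptno and dname supply the values of fields _id and name.
db.aggregate([ {$sql :
  {statement : `SELECT JSON {'_id' : deptno, 'name' : dname} data FROM dept`}} ]);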
Example 3-2 Result for SELECT Query that Returns Data from Multiple Columns (Any
Types)
This example shows two queries that select columns from table dept and construct a JSON
object. (These queries do not use construction JSON{…}.)
This first query selects columns deptno, dname, and loc. The field names of the resulting
object are the aliases of the selected columns and the field values are the corresponding
column values.
Query:
Result:
This second query selects columns deptno and loc, and it uses SQL function SYSTIMESTAMP
to produce a timestamp. The query provides field names id, location, and ts for the
resulting object, instead of using the column names. mongosh wraps the ISO timestamp value
with the ISODate helper.
Query:
Result:
[ {id : 10,
location : 'NEW YORK',
ts : ISODate("2023-12-01T20:44:17.118Z")},
{id : 20,
location : 'DALLAS',
ts : ISODate("2023-12-01T20:44:17.118Z")},
{id : 30,
location : 'CHICAGO',
ts : ISODate("2023-12-01T20:44:17.118Z")},
{id : 40,
location : 'BOSTON',
ts : ISODate("2023-12-01T20:44:17.118Z")} ]
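The second query might resemble this sketch, where quoted column aliases supply the
lowercase field names id, location, and ts (the exact SQL of the original query is an
assumption):
db.aggregate([ {$sql :
  {statement :
     `SELECT deptno       AS "id",
             loc          AS "location",
             SYSTIMESTAMP AS "ts"
        FROM dept`}} ]);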
For a non-SELECT statement, the $sql stage result is a JSON object with the single field
result, whose value is the number of table rows changed by the statement (that is, inserted,
deleted, or updated). When such a stage uses multiple sets of bind variables, the result is an
array of such numbers (of rows changed).
Example 3-3, Example 3-4, Example 3-5, and Example 3-6 illustrate the result for non-
SELECT statements.
[ {result : 0} ]
Example 3-4 Result for a DML Statement That Modifies One Row
The INSERT statement in this $sql stage inserts one row, so result is 1.
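A sketch of such a stage follows (the bound values are illustrative; table employee is the one
used in Example 3-5):
db.aggregate([ {$sql :
  {statement : "INSERT INTO employee VALUES (:name, :job)",
   binds : {"name" : "Amal", "job" : "Analyst"}}} ]);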
[ {result : 1} ]
Example 3-5 Result for a DML Statement That Modifies Three Rows
The INSERT statement in this $sql stage inserts three rows, one for each of the three
sets of bind variables.
db.aggregate([ {$sql :
  {statement : "INSERT INTO employee VALUES (:name, :job)",
   binds : [ {"name" : "John",    "job" : "Programmer"},
             {"name" : "Jane",    "job" : "Manager"},
             {"name" : "Francis", "job" : "CEO"} ]}}]);
[ {result : [ 1, 1, 1 ]} ]
Example 3-6 Result for a DML Statement That Modifies Two Rows
This DELETE statement deletes two rows, so result is 2.
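A sketch of such a stage, assuming that table employee still holds the three rows inserted in
Example 3-5, so that exactly two rows match the predicate:
db.aggregate([ {$sql :
  {statement :
     `DELETE FROM employee
       WHERE job IN (:job1, :job2)`,
   binds : {"job1" : "Programmer", "job2" : "Manager"}}} ]);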
[ {result : 2} ]
Data Types
See Also:
$type in the MongoDB Reference manual
Indexes and Index Properties
Note:
You can create a suitable Oracle Database index using SQL CREATE INDEX
on the backing table of the collection. See Indexes for JSON Data.
If the field cannot ever have an array value, then create a json_value
function-based index. Otherwise, use an index over a materialized view. See
JSON Query Rewrite To Use a Materialized View Over JSON_TABLE.
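For example, assuming a collection emps backed by table emps with JSON-type column data
(as in the earlier examples), and a field sal that can never be an array, a json_value
function-based index could be created from a standalone $sql stage (a sketch; the index
name is illustrative):
db.aggregate([ {$sql :
  {statement :
     `CREATE INDEX emps_sal_idx
        ON emps (json_value(data, '$.sal'
                 RETURNING NUMBER ERROR ON ERROR))`}} ]);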
See Also:
Index Types in the MongoDB Reference manual
See Also:
Index Properties in the MongoDB Reference manual
Index

Symbols
_id field (document identifier)
    and primary key, 1-3, 2-14
    duality views, 1-6
    supported types, 2-18
$sql stage, 3-25

A
aggregation pipeline
    and SQL, 2-12
    definition, 1-3
    operators, 3-16
application migration from MongoDB, 2-9
authentication and authorization, 2-7
autonomous database, 1-2

B
binds field, $sql stage, 3-25
BSON
    conversion of document, 2-14
    conversion of field _id, 2-14, 2-18

C
C driver version, 1-2
C# driver version, 1-2
collation field, 2-18
collection
    definition, 1-3
    mapped, 2-19
    supported by a duality view, 1-6
collection table name, 1-5
commands, database, 3-1
Compass version, 1-2
connection URI, encoding reserved characters, 2-9
converged database, 1-2
conversion
    BSON field _id, 2-14
    BSON scalar types, 2-14
cursor methods, 3-15

D
data migration from MongoDB, 2-9
data types, 3-35
database commands, 3-1
database schema, 1-3, 2-7
Database Tools version, 1-2
database, definition, 1-3, 2-7
dataType field of binds value, 3-25
dialect field, $sql stage, 3-25
document
    conversion from BSON, 2-14
    definition, 1-3
    id, 2-14
    key, 2-14
    maximum size, 2-18
document identifier field, 2-14
    and primary key, 1-3
    duality views, 1-6
    supported types, 2-18
drivers, supported, 1-2
duality views, 1-6

E
encoding characters in a URI, 2-9
escaping characters in a URI, 2-9

F
field order in an object, 2-18
format field, $sql stage, 3-25

G
Go driver version, 1-2

H
hint
    index, 2-1
    SQL monitoring, 2-1

I
id column (document identifier), 1-3, 2-14
identifier field, 2-14
    and primary key, 1-3
    duality views, 1-6
    supported types, 2-18
in-memory column storage, 2-1
index field of binds value, 3-25
index names, unique, 2-18
INDEX SQL hint, 2-1
indexes, 1-3, 2-1, 3-35

J
Java driver version, 1-2
JSON database, autonomous, 1-2
JSON Page, Database Actions, 2-1
JSON scalar type conversion from BSON, 2-14
JSON-relational duality views, 1-6

K
key
    document, 2-14
    primary, 1-3

L
load JSON data, 2-9

M
mapped collections, 2-19
materialized views, 2-1
maximum document size, 2-18
methods, cursor, 3-15
migration from MongoDB, 2-9
MongoDB wire protocol, 1-2
MongoDB, differences from Oracle Database, 2-18
mongodump, 1-2
mongoexport, 1-2
mongoimport, 1-2
mongorestore, 1-2
MongoSH version, 1-2
MONITOR SQL hint, 2-1
monitoring performance, 2-1
multitenant database, 1-2

N
name field of binds value, 3-25
Node.js driver version, 1-2

O
operators
    aggregation pipeline, 3-16
    query and projection, 3-10
    update, 3-13
optimizer, 2-12
Oracle Database, differences from MongoDB, 2-18
order of fields in an object, 2-18
OSON format, 2-14

P
password, in connection URI, 2-9
performance improvement, 2-1
pipeline, aggregation, definition, 1-3
primary key, 1-3, 2-14
projection operators, 3-10
protocol, MongoDB, 1-2
purpose of Oracle Database API for MongoDB, 1-2
PyMongo (Python) driver version, 1-2

Q
query expression, definition, 1-3
query operation, definition, 1-3
query operators, 3-10
query with SQL/JSON functions, 2-12

R
read and write concerns, 2-18
reserved characters in connection URI, 2-9
resetSession field, $sql stage, 3-25
roles, 2-7
Ruby driver version, 1-2
Rust driver version, 1-2

S
scalar type conversion from BSON, 2-14
schema, database, 1-3, 2-7
    accessing collection in different, 2-19
security, 2-7
SQL (Structured Query Language), 1-2
SQL statement, executing with $sql stage, 3-25
SQL/JSON, 2-12
statement field, $sql stage, 3-25
Structured Query Language (SQL), 1-2

U
update operators, 3-13
URI reserved characters, encoding, 2-9

W
wire protocol, MongoDB, 1-2