Apple wins a patent for RoomPlan, which creates a 3D floor plan of a room, including dimensions and types of furniture
In 2021, Apple filed a patent application titled "Floorplan Generation based on Room Scanning." Then in 2022, Apple introduced "RoomPlan," a new Swift API that utilizes the camera and LiDAR Scanner on iPhone and iPad to create a 3D floor plan of a room, including key characteristics such as dimensions and types of furniture.
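For developers, kicking off a scan takes little code. Below is a minimal sketch using the public RoomPlan API, assuming iOS 16 or later and a LiDAR-equipped device; error handling and UI polish are omitted.

```swift
import UIKit
import RoomPlan

// Minimal RoomPlan scan sketch (iOS 16+, LiDAR device assumed).
// RoomCaptureView drives the camera/LiDAR capture and coaching UI,
// and hands back a CapturedRoom when the user finishes scanning.
final class RoomScanViewController: UIViewController, RoomCaptureViewDelegate {
    private var captureView: RoomCaptureView!

    override func viewDidLoad() {
        super.viewDidLoad()
        captureView = RoomCaptureView(frame: view.bounds)
        captureView.delegate = self
        view.addSubview(captureView)
        // Begin scanning with the default configuration.
        captureView.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    // Returning true lets RoomPlan post-process the raw scan data.
    func captureView(shouldPresent roomDataForProcessing: CapturedRoomData, error: Error?) -> Bool {
        true
    }

    // The processed result contains walls, doors, windows, and classified objects.
    func captureView(didPresent processedResult: CapturedRoom, error: Error?) {
        for object in processedResult.objects {
            print(object.category, object.dimensions)   // e.g. a sofa and its size in meters
        }
        // Export the captured room as a USDZ model.
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("room.usdz")
        try? processedResult.export(to: url)
    }
}
```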
Yesterday, Apple was granted that patent, which covers devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment built from sensor data.
Floorplan Generation Based On Room Scanning
Apple's granted patent covers devices, systems, and methods that generate floorplans and measurements using three-dimensional (3D) representations of a physical environment. The 3D representations may be generated from sensor data, such as image and depth sensor data. In some implementations, the generation of floorplans and measurements is facilitated by semantically labeled 3D representations of the environment, produced by performing semantic segmentation and labeling of 3D point clouds. The disclosed techniques may achieve various advantages by encoding a semantic 3D representation, such as a semantically labeled 3D point cloud, onto a two-dimensional (2D) lateral domain, which facilitates the efficient identification of the structures used to generate a floorplan or measurement.
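As an illustration of that encoding idea, here is a hypothetical Swift sketch (none of these types come from Apple's patent or APIs) that projects a semantically labeled point cloud onto a top-down 2D grid, keeping the dominant label per cell:

```swift
import simd

// Hypothetical types for illustration only.
enum SemanticLabel { case wall, door, window, floor, furniture }

struct LabeledPoint {
    let position: simd_float3   // x and z span the lateral plane; y is height
    let label: SemanticLabel
}

/// Projects labeled points onto a top-down grid, keeping the most frequent
/// label per cell. On this dense 2D "lateral domain," finding boundaries
/// becomes a cheap image-style operation rather than a 3D search.
func encodeLateral(points: [LabeledPoint], cellSize: Float) -> [SIMD2<Int>: SemanticLabel] {
    var votes: [SIMD2<Int>: [SemanticLabel: Int]] = [:]
    for p in points {
        let cell = SIMD2(Int((p.position.x / cellSize).rounded(.down)),
                         Int((p.position.z / cellSize).rounded(.down)))
        votes[cell, default: [:]][p.label, default: 0] += 1
    }
    return votes.mapValues { counts in counts.max { $0.value < $1.value }!.key }
}
```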
A floorplan may be provided in various formats. In some implementations, a floorplan includes a 2D top-down view of a room. A floorplan may graphically depict a boundary of a room, e.g., by graphically depicting walls, barriers, or other limitations of the extent of a room, using lines or other graphical features. A floorplan may graphically depict the locations and geometries of wall features such as wall edges, doors, and windows. A floorplan may graphically depict objects within a room, such as couches, tables, chairs, appliances, etc. A floorplan may include identifiers that identify the boundaries, walls, doors, windows, and objects in a room, e.g., including text labels or reference numerals that identify such elements. A floorplan may include indications of measurements of boundaries, wall edges, doors, windows, and objects in a room, e.g., including numbers designating a length of a wall, a diameter of a table, a width of a window, etc.
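Those elements map naturally onto a simple data model. The Swift sketch below is purely illustrative of what a floorplan might carry; it is not Apple's format:

```swift
import simd

// Illustrative floorplan model; hypothetical, not Apple's actual format.
struct Floorplan {
    struct Wall { var start: SIMD2<Float>; var end: SIMD2<Float> }   // top-down, meters
    enum OpeningKind { case door, window }
    struct Opening { var kind: OpeningKind; var width: Float }
    struct Item { var label: String; var center: SIMD2<Float>; var footprint: SIMD2<Float> }

    var walls: [Wall] = []
    var openings: [Opening] = []
    var items: [Item] = []   // couches, tables, appliances, …

    /// Example measurement annotation: the length of one wall segment.
    func length(of wall: Wall) -> Float { simd_distance(wall.start, wall.end) }
}
```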
According to some implementations, a floorplan is created based on a user performing a room scan, e.g., moving a mobile device to capture images and depth data around the user in a room. Some implementations provide a preview of a preliminary 2D floorplan during the room scan. For example, as the user walks around a room capturing the sensor data, the user's device may display a preview of the preliminary 2D floorplan being generated. The preview is "live" in the sense that it is provided during the ongoing capture of the stream or set of sensor data used to generate the preliminary 2D floorplan. To enable a live preview, the preview may be generated (at least initially) differently than a final, post-scan floorplan. In one example, the preview is generated without certain post-processing techniques (e.g., fine-tuning, corner correction, etc.) that are employed to generate the final, post-scan floorplan. In another example, a live preview may use a less computationally intensive neural network than the one used to generate the final, post-scan floorplan. The use of 2D semantic data (e.g., for different layers of the room) may also help make the preview computation efficient enough for live display.
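That split between a cheap live path and a heavier post-scan path can be sketched as two interchangeable estimators. Reusing the hypothetical Floorplan type from the sketch above, with placeholder bodies standing in for the real pipelines:

```swift
// One instant of captured image + depth data (hypothetical).
struct SensorFrame { }

protocol FloorplanEstimator {
    func estimate(from frames: [SensorFrame]) -> Floorplan
}

// Fast path: skips fine-tuning and corner correction so each update is
// cheap enough to redraw while the user is still scanning.
struct PreviewEstimator: FloorplanEstimator {
    func estimate(from frames: [SensorFrame]) -> Floorplan { Floorplan() }  // placeholder
}

// Heavy path: the full pipeline (e.g. a larger network plus corner
// correction), run once after scanning ends.
struct FinalEstimator: FloorplanEstimator {
    func estimate(from frames: [SensorFrame]) -> Floorplan { Floorplan() }  // placeholder
}
```

During the scan, the UI would poll PreviewEstimator on each update; on completion it would swap in FinalEstimator for the finished plan.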
In some implementations, a floorplan may be generated by separately identifying wall structures (e.g., wall edges, doors, and windows) and detecting bounding boxes for objects (e.g., furniture, appliances, etc.). Because wall structures and objects are detected separately, differing techniques may be used for each, and the results are combined to generate a floorplan that represents both.
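A hypothetical Swift sketch of that combination step: walls from one detector, object bounding boxes from another, merged (with a crude inside-the-room sanity filter) into a single plan:

```swift
import simd

// Hypothetical detector outputs.
struct WallSegment { let start: SIMD2<Float>; let end: SIMD2<Float> }

struct ObjectBox {
    let category: String        // "sofa", "refrigerator", …
    let center: SIMD2<Float>    // top-down position, meters
    let extent: SIMD2<Float>    // footprint width/depth, meters
}

struct CombinedPlan {
    let walls: [WallSegment]
    let objects: [ObjectBox]
}

/// The detectors run independently (per the patent, using differing techniques);
/// here the merge just drops boxes whose centers fall outside the walls'
/// axis-aligned extent, a stand-in for a proper inside-room test.
func makePlan(walls: [WallSegment], objects: [ObjectBox]) -> CombinedPlan {
    let xs = walls.flatMap { [$0.start.x, $0.end.x] }
    let ys = walls.flatMap { [$0.start.y, $0.end.y] }
    guard let minX = xs.min(), let maxX = xs.max(),
          let minY = ys.min(), let maxY = ys.max() else {
        return CombinedPlan(walls: walls, objects: [])
    }
    let inside = objects.filter {
        (minX...maxX).contains($0.center.x) && (minY...maxY).contains($0.center.y)
    }
    return CombinedPlan(walls: walls, objects: inside)
}
```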
In some implementations, a floorplan creation process identifies wall structures (e.g., wall edges) based on a 2D representation that encodes 3D semantic data in multiple layers. For example, 3D semantic data may be segmented into a plurality of horizontal layers that are used to identify where the wall edges of the room are located.
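Continuing the earlier point-cloud sketch (and reusing its hypothetical LabeledPoint type), the layering idea might look like this: slice the cloud by height, then keep lateral cells where wall-labeled points appear consistently across layers, since vertical consistency is what distinguishes a wall from clutter:

```swift
import simd

/// Returns lateral cells that behave like walls: wall-labeled points must
/// appear in at least 70% of the horizontal layers (a hypothetical threshold).
func wallCells(points: [LabeledPoint],
               cellSize: Float,
               layerHeight: Float,
               layerCount: Int) -> Set<SIMD2<Int>> {
    var layersHit: [SIMD2<Int>: Set<Int>] = [:]
    for p in points where p.label == .wall {
        let layer = Int((p.position.y / layerHeight).rounded(.down))
        guard (0..<layerCount).contains(layer) else { continue }
        let cell = SIMD2(Int((p.position.x / cellSize).rounded(.down)),
                         Int((p.position.z / cellSize).rounded(.down)))
        layersHit[cell, default: []].insert(layer)
    }
    let needed = Int((Float(layerCount) * 0.7).rounded(.up))
    return Set(layersHit.filter { $0.value.count >= needed }.keys)
}
```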
Apple's patent FIG. 1 is a block diagram of an example operating environment #100, in which a user could scan a room with an iPhone, iPad or Apple Vision Pro. FIG. 4 is a system flow diagram of an example generation of a semantic 3D representation using 3D data and semantic segmentation based on depth and light intensity image information. FIG. 6 is a system flow diagram of an example generation of a live preview of a 2D floorplan of a physical environment based on a 3D representation of that environment.
For finer details, review Apple's granted patent 11,715,265.