- 21 Nov, 2019 1 commit
-
-
David Flynn authored
-
- 26 Aug, 2019 1 commit
-
-
David Flynn authored
-
- 14 Aug, 2019 1 commit
-
-
David Flynn authored
-
- 24 Apr, 2019 1 commit
-
-
David Flynn authored
-
- 16 Apr, 2019 5 commits
-
-
David Flynn authored
-
This commit introduces control of the quantisation step size using the familiar HEVC/AVC quantisation parameter.
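For illustration, a QP of this style typically maps to a step size that doubles every six increments; a minimal sketch of such a mapping, assuming the HEVC-style offset of 4 (the commit does not specify the exact mapping used):

```cpp
#include <cmath>

// Hypothetical QP-to-step-size mapping in the HEVC/AVC style: the step
// size doubles every six QP increments. The offset (4) and rounding are
// assumptions for illustration, not the tool's actual code.
double qpToStepSize(int qp)
{
  return std::pow(2.0, (qp - 4) / 6.0);
}
```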
-
David Flynn authored
Since the implementation has no interaction with the FixedPoint class, it makes little sense to name it so.
-
David Flynn authored
-
The ply-merge tool combines point clouds from multiple ply files into a single output with an extra per-attribute frameindex property that identifies which input frame each point belongs to. The tool is also able to reverse the process and split a merged point cloud into individual frames.
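A minimal sketch of the reverse (split) direction, assuming each point carries an integer frameindex value; the Point record and accessor names below are hypothetical stand-ins, not the tool's real data structures:

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical per-point record: a position plus the extra frameindex
// property written by the merge step.
struct Point {
  double x, y, z;
  uint32_t frameIndex;
};

// Group a merged cloud back into per-frame clouds keyed by frameindex.
std::map<uint32_t, std::vector<Point>>
splitByFrameIndex(const std::vector<Point>& merged)
{
  std::map<uint32_t, std::vector<Point>> frames;
  for (const auto& pt : merged)
    frames[pt.frameIndex].push_back(pt);
  return frames;
}
```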
-
- 11 Feb, 2019 1 commit
-
-
David Flynn authored
-
- 06 Feb, 2019 3 commits
-
-
David Flynn authored
-
This partitioning method (--partitionMethod=2) finds the longest edge of the point cloud's bounding box and divides the cloud into --partitionNumUniformGeom=n slices along that edge. If n = 0, the ratio of the longest edge to the shortest edge determines the number of slices.
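A minimal sketch of the slice assignment implied by this description; the bounding-box type, helper names, and the exact n = 0 fallback formula are assumptions, not the actual implementation:

```cpp
#include <algorithm>

// Hypothetical bounding box of the input cloud.
struct Box {
  double min[3];
  double max[3];
};

// Decide the slice index of a point along the longest bounding-box edge.
// If numSlices == 0, derive a count from the longest/shortest edge ratio,
// following the commit description.
int sliceIndexUniformGeom(const Box& bbox, const double pos[3], int numSlices)
{
  double len[3];
  for (int k = 0; k < 3; k++)
    len[k] = bbox.max[k] - bbox.min[k];

  int longest = int(std::max_element(len, len + 3) - len);
  int shortest = int(std::min_element(len, len + 3) - len);

  if (numSlices == 0)
    numSlices = std::max(1, int(len[longest] / std::max(len[shortest], 1.0)));

  double sliceLen = len[longest] / numSlices;
  int idx = int((pos[longest] - bbox.min[longest]) / sliceLen);
  return std::min(idx, numSlices - 1);
}
```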
-
David Flynn authored
This commit provides a basic framework for partitioning a frame into multiple slices, with the continued assumption of single-frame sequences. The decoder is modified to independently decode each slice and accumulate decoded points in a buffer for output. The encoder is updated to support partitioning the input point cloud into slices and to code each slice independently. Points from reconstructed slices are accumulated and output at the end of the frame period.
The partitioning process (partitioning methods are defined in partitioning.cpp) proceeds as follows:
- quantise the input point cloud without removing duplicate points or reordering points;
- apply the partitioning function to produce a list of tiles and slices, each slice having an origin, an id, and a list of point indexes that identify points in the input point cloud;
- produce a source point cloud for each partition as a subset of the input point cloud;
- compress each partition (slice) as normal by quantising the partitioned input.
Recolouring is necessarily performed against the partitioned input, since the recolouring method cannot correctly handle recolouring a partition from a complete point cloud.
NB: this commit does not provide any partitioning methods.
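A minimal sketch of the encoder-side flow just described; the types and the quantise/partition/compressSlice callables are hypothetical placeholders for the real implementation:

```cpp
#include <array>
#include <vector>

// Hypothetical stand-ins for the real codec structures.
using Point = std::array<int, 3>;
struct PointCloud { std::vector<Point> points; };

struct Slice {
  int sliceId;
  Point origin;
  std::vector<size_t> pointIndexes;  // indexes into the quantised input
};

// Build a slice's source cloud as a subset of the (quantised) input.
PointCloud subset(const PointCloud& in, const std::vector<size_t>& idx)
{
  PointCloud out;
  for (size_t i : idx)
    out.points.push_back(in.points[i]);
  return out;
}

// Sketch of the per-frame encoder flow; quantise(), partition() and
// compressSlice() stand in for the actual implementations.
template <typename QuantiseFn, typename PartitionFn, typename CompressFn>
void encodeFrame(
  const PointCloud& input,
  QuantiseFn quantise, PartitionFn partition, CompressFn compressSlice)
{
  // 1. quantise without removing duplicate points or reordering
  PointCloud quantised = quantise(input);

  // 2. produce the list of slices (tiles omitted for brevity)
  std::vector<Slice> slices = partition(quantised);

  // 3-4. compress each slice independently; recolouring is performed
  // against the partitioned (subset) input
  for (const auto& slice : slices)
    compressSlice(slice, subset(quantised, slice.pointIndexes));
}
```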
-
- 05 Feb, 2019 1 commit
-
-
This replaces the previous floating-point transform implementation with a fixed-point alternative that has essentially identical compression performance.
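For illustration, a fixed-point value of this kind is an integer with an implied number of fractional bits; the 16-bit fraction below is an assumption, not necessarily the precision the codec uses:

```cpp
#include <cstdint>

// Illustrative fixed-point representation: a 64-bit integer with an
// assumed 16 fractional bits (the codec's actual precision may differ).
constexpr int kFracBits = 16;

int64_t toFixed(double v)   { return int64_t(v * (int64_t(1) << kFracBits)); }
double  toDouble(int64_t f) { return double(f) / (int64_t(1) << kFracBits); }

// Multiplication must rescale to keep the implied binary point in place.
int64_t fixedMul(int64_t a, int64_t b) { return (a * b) >> kFracBits; }
```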
-
- 14 Nov, 2018 1 commit
-
-
David Flynn authored
-
- 02 Nov, 2018 1 commit
-
-
David Flynn authored
-
- 31 Oct, 2018 10 commits
-
-
David Flynn authored
This commit provides an implementation of the entropy coding interface using the dirac (schroedinger) arithmetic codec. In order to handle any remaining m-ary symbols, a naïve unary binarisation is employed.
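For reference, a naïve unary binarisation expresses a value v as v one-bits followed by a terminating zero. A minimal, self-contained sketch against a generic binary encoder interface (the encodeBin method and the recording stand-in are hypothetical, not the project's wrapper API):

```cpp
#include <vector>

// Hypothetical stand-in for a binary arithmetic encoder; it simply
// records the bins so the sketch is self-contained.
struct BinaryArithmeticEncoder {
  std::vector<int> bins;
  void encodeBin(int bin, int /*ctxIdx*/) { bins.push_back(bin); }
};

// Naive unary binarisation: an m-ary value v becomes v ones then a zero.
void encodeUnary(BinaryArithmeticEncoder& enc, unsigned v, int ctxIdx)
{
  for (unsigned i = 0; i < v; i++)
    enc.encodeBin(1, ctxIdx);
  enc.encodeBin(0, ctxIdx);
}
```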
-
David Flynn authored
This commit adds an arithmetic codec interface class that allows a compile time choice of arithmetic codec implementation. Context types are renamed to support compile time selection, and existing support functions that were added to the third-party arithmetic codec are moved to the wrapper.
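One common way to realise such a compile-time choice is a preprocessor-selected type alias; a minimal sketch with hypothetical wrapper and macro names (not the project's actual identifiers):

```cpp
// Hypothetical wrappers around two arithmetic codec implementations,
// exposing the same interface so client code is unchanged.
struct EntropyEncoderOriginal { /* wraps the existing third-party codec */ };
struct EntropyEncoderDirac    { /* wraps the dirac/schroedinger codec   */ };

// Compile-time selection: client code only ever names EntropyEncoder.
#if defined(USE_DIRAC_ARITHMETIC_CODEC)
using EntropyEncoder = EntropyEncoderDirac;
#else
using EntropyEncoder = EntropyEncoderOriginal;
#endif
```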
-
David Flynn authored
This commit provides a method to predict the child occupancy bits of a node based on the node's 26 neighbours. The prediction is used to contextualise coding of each occupancy bit. This tool requires the use of the occupancyAtlas for neighbour lookup. NB: a restriction in the current implementation requires that the atlas size is at most 8³.
intra_pred_max_node_size_log2: 6
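A minimal sketch of gathering occupancy from the 26 surrounding nodes in a small atlas; the atlas layout and names are hypothetical, and the 8×8×8 window reflects the restriction noted above:

```cpp
#include <array>
#include <cstdint>

// Hypothetical occupancy atlas: one flag per node position in an
// 8x8x8 window (the size restriction mentioned above).
struct OccupancyAtlas {
  static constexpr int kDim = 8;
  std::array<uint8_t, kDim * kDim * kDim> occupied{};

  bool get(int x, int y, int z) const
  {
    if (x < 0 || y < 0 || z < 0 || x >= kDim || y >= kDim || z >= kDim)
      return false;
    return occupied[(x * kDim + y) * kDim + z] != 0;
  }
};

// Count occupied nodes among the 26 neighbours of (x, y, z); a real
// predictor would weight each neighbour's contribution per child bit.
int countOccupiedNeighbours(const OccupancyAtlas& atlas, int x, int y, int z)
{
  int count = 0;
  for (int dx = -1; dx <= 1; dx++)
    for (int dy = -1; dy <= 1; dy++)
      for (int dz = -1; dz <= 1; dz++) {
        if (!dx && !dy && !dz)
          continue;
        count += atlas.get(x + dx, y + dy, z + dz) ? 1 : 0;
      }
  return count;
}
```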
-
This commit provides an m-ary entropy coder based on a fixed-size dictionary with periodic updates, a cache of recently used symbols (updated using an LRU eviction policy), and a fallback direct binary coding of any unhandled symbols.
NB: the proposed version used a context with a halving period (max_count) of 64 symbols. However, this conflicts with another adoption (512 symbols) and with a wholesale replacement of the arithmetic codec and context model. To resolve the conflict, the existing halving period (128) is used.
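A minimal sketch of the LRU-updated cache of recently used symbols; the dictionary, halving period, and binary fallback are omitted, and all names are hypothetical rather than the coder's actual API:

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>

// Hypothetical cache of recently used m-ary symbols with LRU eviction;
// the real coder pairs this with a periodically updated dictionary and
// a direct binary fallback for uncached symbols.
class SymbolLruCache {
public:
  explicit SymbolLruCache(size_t capacity) : capacity_(capacity) {}

  // Returns true if the symbol was already cached; either way the symbol
  // becomes the most recently used entry.
  bool useSymbol(uint32_t symbol)
  {
    auto it = index_.find(symbol);
    bool hit = it != index_.end();
    if (hit)
      lru_.erase(it->second);
    else if (lru_.size() >= capacity_) {
      index_.erase(lru_.back());  // evict the least recently used symbol
      lru_.pop_back();
    }
    lru_.push_front(symbol);
    index_[symbol] = lru_.begin();
    return hit;
  }

private:
  size_t capacity_;
  std::list<uint32_t> lru_;  // front = most recently used
  std::unordered_map<uint32_t, std::list<uint32_t>::iterator> index_;
};
```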
-
This commit integrates a C++ trisoup codec, replacing the previous Matlab implementation. The provided code has been reworked to avoid duplicated code and dead code, and to operate with the current HLS.
-
David Flynn authored
This commit splits the handling of the geometry brick header and octree geometry coding. The encoder/decoder classes now take care of coding the header, while the geometry coder handles the geometry coding itself.
-
David Flynn authored
This commit moves method definitions out of a header file into a separate compilation unit.
-
David Flynn authored
This commit moves various constants from PCCTMC3Common.h to a new constants.h. Hard-coded values of constants have been replaced with their symbolic names.
-
David Flynn authored
This commit moves method definitions out of a header file into a separate compilation unit.
-
David Flynn authored
The geometry coder is quite large, especially with trisoup, and derives no benefit from being a header-only implementation. This commit moves the geometry octree coder out of the header files and into geometry_octree_{en,de}coder.cpp.
-
- 05 Sep, 2018 1 commit
-
-
David Flynn authored
-
- 03 Sep, 2018 1 commit
-
-
David Flynn authored
-
- 29 Aug, 2018 1 commit
-
-
David Flynn authored
-
- 20 Aug, 2018 3 commits
-
-
David Flynn authored
This commit rewrites the codec high-level syntax:
- the bitstream is divided into "bricks" (akin to an AVC/HEVC slice/tile);
- sequence, geometry and attribute parameter sets describe the coding parameters in use, both in general and for a specific brick;
- marshalling the bitstream payloads to a file format is achieved using a type-length-value encoding scheme.
Additionally, the triSoup bitstream scale and translation values have been unified with (replaced by) the octree counterparts. For compatibility, existing command line parameters continue to function as before.
NB: this commit does not incorporate flexibility in the decoding order. The decoder requires the bitstream to be presented in a fixed order.
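A minimal sketch of the type-length-value marshalling idea; the field widths and byte order here are assumptions for illustration, not the actual file format:

```cpp
#include <cstdint>
#include <ostream>
#include <vector>

// Illustrative TLV writer: a one-byte payload type, a four-byte
// big-endian length, then the payload bytes. The real format's field
// sizes and ordering may differ.
void writeTlv(std::ostream& out, uint8_t type, const std::vector<uint8_t>& payload)
{
  uint32_t len = uint32_t(payload.size());
  out.put(char(type));
  for (int shift = 24; shift >= 0; shift -= 8)
    out.put(char((len >> shift) & 0xff));
  out.write(reinterpret_cast<const char*>(payload.data()), payload.size());
}
```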
-
-
David Flynn authored
This commit removes the bundled Intel Threading Building Blocks library from the repository.
-
- 08 Jun, 2018 1 commit
-
-
David Flynn authored
-
- 05 Jun, 2018 4 commits
-
-
This commit does not add the triangle soup geometry compressor to TMC3 itself. Rather, it adds a means to call out to TMC1 binaries to perform the compression and to integrate the results into the bitstream. Attribute compression is performed natively by the TMC13 codec.
-
This commit ports the Region Adaptive Hierarchical Transform for attribute coding with colour and reflectance variants.
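For context, the core of RAHT is a weighted two-point transform applied hierarchically over the octree; one common formulation from the literature (an illustration of the general technique, not necessarily the exact normalisation used in this port) combines the attribute values c1, c2 of two neighbouring nodes with weights w1, w2 (their point counts):

```latex
\begin{bmatrix} l \\ h \end{bmatrix}
  = \frac{1}{\sqrt{w_1 + w_2}}
    \begin{bmatrix} \sqrt{w_1} & \sqrt{w_2} \\ -\sqrt{w_2} & \sqrt{w_1} \end{bmatrix}
    \begin{bmatrix} c_1 \\ c_2 \end{bmatrix},
  \qquad w = w_1 + w_2
```

The low-pass coefficient l is passed to the next level with weight w, while the high-pass coefficient h is quantised and entropy coded.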
-
David Flynn authored
In order to support multiple different attribute compression schemes, this commit moves the existing attribute compression code into a self-contained class.
-
David Flynn authored
In order to support multiple different attribute compression schemes, this commit moves the existing attribute compression code into a self-contained class.
-
- 23 May, 2018 1 commit
-
-
David Flynn authored
In order to avoid obscure broken builds when switching branches or otherwise adding/removing files, this commit removes the use of wildcards for source files in the tmc3 directory. To add files to or remove them from the build, tmc3/CMakeLists.txt must be modified, thereby permitting the build system to detect changes in the file list.
-
- 10 May, 2018 1 commit
-
-
David Flynn authored
-
- 09 May, 2018 1 commit
-
-
David Flynn authored
-