def f1(x:int): return 'int1'
def f2(x:float): return 'float2'
def f3(x:str): return 'str3'
def f4(x:int): return 'int4'
f = Function(f1).dispatch(f1).dispatch(f2)
g = Function(f3).dispatch(f3).dispatch(f4)
h = _merge_funcs(f,g)
test_eq(h(1), 'int1')
test_eq(h('a'), 'str3')
test_eq(h(1.), 'float2')
Transform and Pipeline
The classes here provide functionality for creating a composition of partially reversible functions. By “partially reversible” we mean that a transform can be decoded, creating a form suitable for display. This is not necessarily identical to the original form (e.g. a transform that changes a byte tensor to a float tensor does not recreate a byte tensor when decoded, since that may lose precision, and a float tensor can be displayed already).
Classes are also provided for composing transforms and mapping them over collections. Pipeline is a transform that composes several Transforms, knowing how to decode them or show an encoded item.
The goal of this module is to replace fastcore.Transform by using the Plum package for multiple dispatch rather than the fastcore.dispatch module. Plum is a well-maintained library that provides better dispatch functionality.
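To illustrate the multiple-dispatch idea before diving into the library itself, here is a hand-rolled sketch of type-based dispatch. This is purely illustrative (the SimpleDispatcher name and its registry are invented for this example); Plum's real implementation additionally handles unions, parametric types, and ambiguity detection:

```python
# A hand-rolled sketch of type-based dispatch (illustration only; Plum's real
# implementation also handles unions, parametric types, ambiguity errors, etc.)
class SimpleDispatcher:
    def __init__(self): self.methods = {}
    def register(self, f):
        # Use the annotation of the first parameter as the dispatch key
        anns = [v for k,v in f.__annotations__.items() if k != 'return']
        self.methods[anns[0] if anns else object] = f
        return self
    def __call__(self, x):
        # Walk the MRO so subclasses fall back to parent implementations
        for t in type(x).__mro__:
            if t in self.methods: return self.methods[t](x)
        return x  # no matching method: return the input unchanged

disp = SimpleDispatcher()
def enc_int(x: int): return x*2
def enc_str(x: str): return f"hello {x}!"
disp.register(enc_int)
disp.register(enc_str)

assert disp(3) == 6
assert disp("Alex") == "hello Alex!"
assert disp(1.5) == 1.5  # no method for float: input returned unchanged
```

Note the fallback of returning the input unchanged, which mirrors the behavior of Transform described later in this page.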
Transform
Transform (*args, **kwargs)
Delegates (__call__, decode, setup) to (encodes, decodes, setups) if split_idx matches
The main Transform features:
- Type dispatch - Type annotations are used to determine if a transform should be applied to the given argument. You can also provide several implementations, and the one to run is chosen based on the type. This is useful, for example, when running both independent and dependent variables through the pipeline, where some transforms only make sense for one and not the other. Another use case is designing a transform that handles different data formats. Note that if a transform takes multiple arguments, only the type of the first one is used for dispatch.
- Handling of tuples - When a tuple (or a subclass of tuple) of data is passed to a transform, it will be applied to each element separately. You can opt out of this behavior by passing a list or an L, as only tuples get this specific behavior. An alternative is to use ItemTransform, defined below, which will always take the input as a whole.
- Reversibility - A transform can be made reversible by implementing the decodes method. This is mainly used to turn something like a category, which is encoded as a number, back into a label understandable by humans for showing purposes. Like the regular call method, the decode method will be applied over each element of a tuple separately.
- Type propagation - Whenever possible a transform tries to return data of the same type it received. This is mainly used to maintain the semantics of things like ArrayImage, which is a thin wrapper of PyTorch's Tensor. You can opt out of this behavior by adding a ->None return type annotation.
- Preprocessing - The setup method can be used to perform any one-time calculations to be later used by the transform, for example generating a vocabulary to encode categorical data.
- Filtering based on the dataset type - By setting the split_idx flag you can make the transform be used only in a specific DataSource subset, such as in training but not validation.
- Ordering - You can set the order attribute, which the Pipeline uses when it needs to merge two lists of transforms.
- Appending new behavior with decorators - You can easily extend an existing Transform by creating encodes or decodes methods for new data types. You can put those new methods outside the original transform definition and decorate them with the class you wish them patched into. This can be used by fastai library users to add their own behavior, or by multiple modules contributing to the same transform.
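The tuple-handling rule above can be sketched in plain Python. This is an illustrative approximation of the behavior, not the library's code:

```python
# Sketch of the tuple rule: tuples (including nested ones) are mapped over
# element-wise; lists and other collections are passed through whole.
def apply_tfm(f, x):
    if isinstance(x, tuple):
        return tuple(apply_tfm(f, o) for o in x)  # recurse into nested tuples
    return f(x)

assert apply_tfm(lambda o: o*2, (1, (2, 3))) == (2, (4, 6))
assert apply_tfm(lambda o: o*2, [1, 2]) == [1, 2, 1, 2]  # list handled as a whole
```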
A realistic example
A Transform encodes (transforms) data while optionally providing a decode operation to convert back, and a setup function to initialize state variables used during en-/decoding.
This can be useful, for example, when encoding categorical variables in a machine learning pipeline.
The setup method can be used to track which string representation of the class maps to which integer, while the decode method helps make the outputs human-readable again.
One way to create a transform is to create a subclass of Transform:
class CatEncoder(Transform):
    c2n, n2c = None, None
    def encodes(self, x): return self.c2n[x]
    def decodes(self, x): return self.n2c[x]
    def setups(self, x):
        self.c2n = {k:v for v,k in enumerate(x)}
        self.n2c = {k:v for v,k in self.c2n.items()}
Before you use it you have to initialize it:
ce = CatEncoder()
ce
CatEncoder(enc:1,dec:1)
You can see that one encode and one decode method are defined.
To set up an encoder you call the setup method:
x = ('a','b','c','d')
ce.setup(x)
ce.c2n, ce.n2c
({'a': 0, 'b': 1, 'c': 2, 'd': 3}, {0: 'a', 1: 'b', 2: 'c', 3: 'd'})
To encode data you call the encoder directly:
ce(x)
(0, 1, 2, 3)
To decode data you call the decode method:
ce.decode(ce(x))
('a', 'b', 'c', 'd')
Defining a Transform
There are a few ways to create a transform, with different ratios of simplicity to flexibility:
- Passing methods to the constructor - Instantiate the Transform class and pass your functions as enc and dec arguments.
- @Transform decorator - Turn any function into a Transform by just adding a decorator - very straightforward if all you need is a single encodes implementation.
- Extending the Transform class - Use inheritance to implement the methods you want.
- Passing a function to fastai APIs - Same as above, but when passing a function to other transform-aware classes like Pipeline or TfmdDS you don't even need a decorator. Your function will get converted to a Transform automatically.
Passing methods to the constructor
A simple way to create a Transform is to pass a function to the constructor. In the below example, we pass an anonymous function that multiplies its input by 2:
f = Transform(lambda o: o*2)
If you call this transform, it will apply the transformation:
test_eq_type(f(2), 4)
@Transform decorator
You can also define a Transform by using the @Transform decorator directly:
@Transform
def f(x:str): return f"hello {x}!"

test_eq(f("Alex"), "hello Alex!")
Define with classmethod
class B:
    @classmethod
    def create(cls, x:int): return x+1

test_eq(Transform(B.create)(1), 2)
Multiple dispatch
Type dispatch
Transform
uses type annotations to automatically select the appropriate implementation for different input types.
This is called multiple dispatch, or type dispatch.
The benefit is that a single Transform can handle multiple data formats without explicit conditional logic.
def enc1(x: int): return x*2
def enc2(x: str): return f"hello {x}!"

f = Transform(enc=[enc1, enc2])
f
enc1(enc:2,dec:0)
test_eq(f(2), 4)
test_eq_type(f("Alex"), "hello Alex!")
Return the input if no matching method is found
If there is no valid method to which encode can dispatch, then a Transform will return the input value unchanged.
This is a useful default in the context of machine learning pre-processing pipelines.
def enc(x:str): return "str!"

f = Transform(enc)
test_eq(f(2), 2)
Ambiguous vs NoFound lookups
A difference with fastcore.Transform
is that this version is stricter about ambiguous lookups.
That’s because this version uses the plum-dispatch
library which has a better underlying system for allocating the inputs to the right function.
def enc1(x: int|str): return f"INT|STR {x=}!"
def enc2(x: float|str): return f"FLOAT|STR {x=}!"
e = Transform(enc=[enc1, enc2])
e
enc1(enc:2,dec:0)
test_eq(e(5), "INT|STR x=5!")
test_eq(e(.5), "FLOAT|STR x=0.5!")
test_eq(e([1]), [1]) # NoFoundLookups returns self
try: e("hi there") # could be either encodes function
except AmbiguousLookupError: print("Caught an expected AmbiguousLookupError")
Caught an expected AmbiguousLookupError
Type inheritance for input types is supported
You can bring your own types:
class FS(float):
def __repr__(self): return f'FS({super().__repr__()})'
def __str__(self): return f'{super().__str__()}'
def enc1(x: int|FS): return x/2

h = Transform(enc1)
test_eq(h(FS(5.0)), 2.5)
And type inheritance is supported
def enc1(x: int|float): return x/2

h = Transform(enc=enc1)
test_eq(h(FS(5.0)), 2.5)
Return type casting
Without any intervention it is easy for operations to change types in Python. For example, FS (defined above) becomes a plain float after multiplication:
test_eq_type(FS(3.0) * 2, 6.0)
This behavior is often not desirable when performing transformations on data. Therefore, Transform
will attempt to cast the output to be of the same type as the input by default. In the below example, the output will be cast to a FS
type to match the type of the input:
Without type annotations
@Transform
def f(x): return x*2

test_eq_type(f(FS(3.0)), FS(6.0))
We can optionally turn off casting by annotating the transform function with a return type of None:
Return type None
@Transform
def f(x)-> None: return x*2 # Same transform as above, but with a -> None annotation

test_eq_type(f(FS(3.0)), 6.0) # Casting is turned off because of -> None annotation
However, Transform will only cast output back to the input type when the input is a subclass of the output. In the below example, the input is of type FS
which is not a subclass of the output which is of type str. Therefore, the output doesn’t get cast back to FS
and stays as type str:
@Transform
def f(x): return str(x)

test_eq_type(f(FS(2.)), '2.0')
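The casting rule can be summarized with a small stand-alone sketch. This is an assumption about the behavior described above, not the library's actual helper, and FS2 here is a local stand-in for the FS class used in the examples:

```python
# Sketch of the retain-type rule: cast the result back to the input's type
# only when the input is an instance of the result's type (i.e. the input
# type is a subclass of the output type), and the two types differ.
def retain_type(res, inp):
    if isinstance(inp, type(res)) and type(res) is not type(inp):
        return type(inp)(res)
    return res

class FS2(float): pass  # stand-in for the FS class above

assert type(retain_type(6.0, FS2(3.0))) is FS2       # float result cast back to FS2
assert type(retain_type(str(2.0), FS2(2.0))) is str  # FS2 is not a subclass of str: keep str
```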
Transform will attempt to convert the function output to the return type annotation.
Specific return types
If a return type annotation is given, Transform will convert it to that type:
@Transform
def f(x)->FS: return float(x)

# Output is converted to FS because it's a subtype of float
test_eq(f(1.), FS(1.))
If the function returns a subclass of the annotated return type, that more specific type will be preserved since it’s already compatible with the annotation:
@Transform
def f(x)->float: return FS(x)

# FS output is kept because it's more specific than float
test_eq(f(1.), FS(1.))
When return types are given, the conversion will even happen if the output type is not a subclass of the return type annotation:
@Transform
def f(x)->str: return FS(x)

test_eq(f(1.), "FS(1.0)")
And here we get an expected error because it’s not possible to match the explicit return type:
@Transform
def f(x)->int: return str(x)
try: f("foo")
except Exception as e: print(f"Caught Exception: {e=}")
Caught Exception: e=ValueError("invalid literal for int() with base 10: 'foo'")
Type annotation with Decode
Just like encodes, the decodes method will cast outputs to match the input type in the same way. In the below example, the output of decodes remains of type FS:
def enc(x): return FS(x+1)
def dec(x): return x-1

f = Transform(enc,dec)
t = f(1.0) # t will be FS
test_eq_type(f.decode(t), FS(1.0))
Transforms on Lists
Transform operates on lists as a whole, not element-wise:
def enc(x): return dict(x)
def dec(x): return list(x.items())

f = Transform(enc,dec)
_inp = [(1,2), (3,4)]
t = f(_inp)

test_eq(t, dict(_inp))
test_eq(f.decodes(t), _inp)
If you want a transform to operate on a list elementwise, you must implement this appropriately in the encodes and decodes methods:
def enc(x): return [x_+1 for x_ in x]
def dec(x): return [x_-1 for x_ in x]

f = Transform(enc,dec)
t = f([1,2])

test_eq(t, [2,3])
test_eq(f.decode(t), [1,2])
Transforms on Tuples
Unlike lists, Transform operates on tuples element-wise.
def neg_int(x): return -x

f = Transform(neg_int)
test_eq(f((1,2,3)), (-1,-2,-3))
Transforms will also apply TypedDispatch element-wise on tuples when an input type annotation is specified. In the below example, the values 1.0 and 3.0 are ignored because they are of type float, not int:
def neg_int(x:int): return -x

f = Transform(neg_int)
test_eq(f((1.0, 2, 3.0)), (1.0, -2, 3.0))
Another example of how Transform can use TypedDispatch with tuples is shown below:
def enc1(x: int): return x+1
def enc2(x: str): return x+'hello'
def enc3(x): return str(x)+'!'
f = Transform(enc=[enc1, enc2, enc3])
If the input is not an int or str, the third encodes method will apply:
test_eq(f([1]), '[1]!')
test_eq(f([1.0]), '[1.0]!')
However, if the input is a tuple, then the appropriate method will apply according to the type of each element in the tuple:
test_eq(f(('1',)), ('1hello',))
test_eq(f((1,2)), (2,3))
test_eq(f(('a',1.0)), ('ahello','1.0!'))
Dispatching over tuples works recursively, by the way:
def enc1(x:int): return x+1
def enc2(x:str): return x+'_hello'
def dec1(x:int): return x-1
def dec2(x:str): return x.replace('_hello', '')
f = Transform(enc=[enc1, enc2], dec=[dec1, dec2])
start = (1.,(2,'3'))
t = f(start)
test_eq_type(t, (1.,(3,'3_hello')))
test_eq(f.decode(t), start)
Dispatching also works with abstract base classes, like numbers.Integral:
@Transform
def f(x:numbers.Integral): return x+1
t = f((1,'1',1))
test_eq(t, (2, '1', 2))
Transform on subsets with split_idx
def enc(x): return x+1
def dec(x): return x-1

f = Transform(enc,dec)
f.split_idx = 1
The transformations are applied when a matching split_idx parameter is passed:
test_eq(f(1, split_idx=1), 2)
test_eq(f.decode(2, split_idx=1), 1)
On the other hand, transformations are ignored when the split_idx parameter does not match:
test_eq(f(1, split_idx=0), 1)
test_eq(f.decode(2, split_idx=0), 2)
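The gating behavior can be sketched as a plain function. This is a hypothetical illustration of the rule (the names maybe_apply, tfm_split_idx, and call_split_idx are invented here), not the library's internal code:

```python
# Hypothetical sketch of the split_idx gate: the transform function runs only
# when the transform's split_idx is unset or matches the one passed at call time.
def maybe_apply(f, x, tfm_split_idx, call_split_idx):
    if tfm_split_idx is not None and call_split_idx != tfm_split_idx:
        return x  # filtered out: input passes through unchanged
    return f(x)

assert maybe_apply(lambda x: x+1, 1, tfm_split_idx=1, call_split_idx=1) == 2
assert maybe_apply(lambda x: x+1, 1, tfm_split_idx=1, call_split_idx=0) == 1
assert maybe_apply(lambda x: x+1, 1, tfm_split_idx=None, call_split_idx=0) == 2
```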
Extending Transform
Limitation of calling Transform directly
However, in this case the transform is not extensible; a new implementation would overwrite the previous one:
@Transform
def g(x:int): return x*3

test_eq(g(2), 6)
test_eq(g('a'), 'a') # <- resorts to returning self
test_eq(len(g.encodes.methods), 1)
For extensible Transforms, take a look at the “Extending the Transform class” section below.
Subclassing Transform
When you subclass Transform you can define multiple encodes as methods directly.
class A(Transform):
def encodes(self, x:int): return x*2
def encodes(self, x:str): return f'hello {x}!'
test_eq(len(A.encodes.methods), 2)

a = A()
test_eq(a(2), 4)
test_eq(a('Alex'), "hello Alex!")
Continued inheritance is supported
class B(A):
def encodes(self, x:int): return x*4
def encodes(self, x:float): return x/2
test_eq(len(B.encodes.methods), 3)

b = B()
test_eq(b(2), 8)
test_eq(b('Alex'), 'hello Alex!')
test_eq(b(5.), 2.5)
As is multiple inheritance:
class A(Transform):
def encodes(self, x:int): return x*2
def encodes(self, x:str): return f'hello {x}!'
class B(Transform):
def encodes(self, x:int): return x*4
def encodes(self, x:float): return x/2
class C(B,A): # C is preferred over B, which is preferred over A
    def encodes(self, x:float): return x/4
test_eq(len(A.encodes.methods), 2)
test_eq(len(B.encodes.methods), 2)
test_eq(len(C.encodes.methods), 3)

c = C()
test_eq(c('Alex'), 'hello Alex!') # A's str method
test_eq(c(5), 20) # B's int method
test_eq(c(10.), 2.5) # C's float method
Extensions with decorators
Another way to define a Transform is to extend the Transform class:
class A(Transform): pass
And then use decorators:
@A
def encodes(self, x:int): return x*2
@A
def decodes(self,x:int): return x//2
test_eq(len(A.encodes.methods),1)
test_eq(len(A.decodes.methods),1)

a = A()
test_eq(a(5),10)
test_eq(a.decode(a(5)),5)
Note that adding a method to a class (A) after instantiating the object (a):
@A
def encodes(self, x:str): return f'hello {x}!'
Will result in the method being accessible in both:
test_eq(len(A.encodes.methods),2)
test_eq(len(a.encodes.methods),2)
Predefined Transform extensions
Below are some Transforms that may be useful as reusable components.
InplaceTransform
InplaceTransform
InplaceTransform (*args, **kwargs)
A Transform that modifies in-place and just returns whatever it’s passed
class A(InplaceTransform): pass
@A
def encodes(self, x:pd.Series): x.fillna(10, inplace=True)
f = A()

test_eq_type(f(pd.Series([1,2,None])), pd.Series([1,2,10], dtype=np.float64)) # fillna fills with floats
DisplayedTransform
DisplayedTransform
DisplayedTransform (*args, **kwargs)
A transform with a __repr__ that shows its attrs
Transforms are normally represented by just their class name and the number of encodes and decodes implementations:
class A(Transform): encodes,decodes = noop,noop
f = A()
f
A(enc:2,dec:2)
A DisplayedTransform will in addition show the contents of all attributes listed in the comma-delimited string self.store_attrs:
class A(DisplayedTransform):
    encodes = noop
    def __init__(self, a, b=2):
        super().__init__()
        store_attr()

A(a=1, b=2)
A -- {'a': 1, 'b': 2}
(enc:2,dec:0)
ItemTransform
ItemTransform
ItemTransform (*args, **kwargs)
A transform that always takes tuples as whole items
ItemTransform is the class to use to opt out of the default behavior of Transform.
class AIT(ItemTransform):
def encodes(self, xy): x,y=xy; return (x+y,y)
def decodes(self, xy): x,y=xy; return (x-y,y)
f = AIT()
test_eq(f((1,2)), (3,2))
test_eq(f.decode((3,2)), (1,2))
If you pass a special tuple subclass, the usual retain type behavior of Transform will keep it:
class _T(tuple): pass
x = _T((1,2))
test_eq_type(f(x), _T((3,2)))
Func
get_func
get_func (t, name, *args, **kwargs)
Get the t.name (potentially partial-ized with args and kwargs) or noop if not defined
This works for any kind of t supporting getattr, so a class or a module.
test_eq(get_func(operator, 'neg', 2)(), -2)
test_eq(get_func(operator.neg, '__call__')(2), -2)
test_eq(get_func(list, 'foobar')([2]), [2])

a = [2,1]
get_func(list, 'sort')(a)
test_eq(a, [1,2])
Transforms are built with multiple dispatch: a given function can have several methods, depending on the type of the object received. This is done with the Plum module and type annotations in Transform, but you can also use the following class.
Func
Func (name, *args, **kwargs)
Basic wrapper around a name with args and kwargs to call on a given type
You can call the Func object on any module name or type, even a list of types. It will return the corresponding function (with a default to noop if nothing is found) or list of functions.
test_eq(Func('sqrt')(math), math.sqrt)
Sig
Sig (*args, **kwargs)
Sig is just syntactic sugar to create a Func object more easily, with the syntax Sig.name(*args, **kwargs).
f = Sig.sqrt()
test_eq(f(math), math.sqrt)
Pipeline
A class for composing multiple (partially) reversible transforms
Pipeline
allows you to compose multiple transforms that can be partially reversed through decoding. When a transform is “decoded”, it creates a form suitable for display, though this may not be identical to the original input (for instance, a transform from bytes to floats would typically decode to floats rather than converting back to bytes, since that could lose precision).
Pipeline
handles the composition of multiple transforms while maintaining the ability to decode or display the transformed items at any stage.
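The core idea can be sketched in a few lines of plain Python. This MiniPipeline is an illustration invented for this page, not the library's Pipeline: it skips type dispatch, setup, show, and ordering, keeping only the encode-forward/decode-backward structure:

```python
# A stripped-down sketch of the Pipeline idea: encode by applying each step
# in order, decode by applying the inverse steps in reverse order.
class MiniPipeline:
    def __init__(self, pairs): self.pairs = pairs  # list of (enc, dec) pairs
    def __call__(self, x):
        for enc, _ in self.pairs: x = enc(x)
        return x
    def decode(self, x):
        for _, dec in reversed(self.pairs): x = dec(x)
        return x

p = MiniPipeline([(lambda x: x+1, lambda x: x-1),
                  (lambda x: x*2, lambda x: x//2)])
assert p(3) == 8           # (3+1)*2
assert p.decode(p(3)) == 3
```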
compose_tfms
compose_tfms (x, tfms, is_enc=True, reverse=False, **kwargs)
Apply all func_nm attributes of tfms on x, maybe in reverse order
def to_int (x): return Int(x)
def to_float(x): return Float(x)
def double (x): return x*2
def half(x)->None: return x/2
def test_compose(a, b, *fs): test_eq_type(compose_tfms(a, tfms=map(Transform,fs)), b)

test_compose(1, Int(1), to_int)
test_compose(1, Float(1), to_int,to_float)
test_compose(1, Float(2), to_int,to_float,double)
test_compose(2.0, 2.0, to_int,double,half)
class A(Transform):
def encodes(self, x:float): return Float(x+1)
def decodes(self, x): return x-1
tfms = [A(), Transform(math.sqrt)]
t = compose_tfms(3., tfms=tfms)
test_eq_type(t, Float(2.))
test_eq(compose_tfms(t, tfms=tfms, is_enc=False), 1.)
test_eq(compose_tfms(4., tfms=tfms, reverse=True), 3.)
tfms = [A(), Transform(math.sqrt)]
test_eq(compose_tfms((9,3.), tfms=tfms), (3,2.))
mk_transform
mk_transform (f)
Convert function f to Transform if it isn’t already one
gather_attrs
gather_attrs (o, k, nm)
Used in getattr to collect all attrs k from self.{nm}
gather_attr_names
gather_attr_names (o, nm)
Used in dir to collect all attrs k from self.{nm}
Pipeline
Pipeline (funcs=None, split_idx=None)
A pipeline of composed (for encode/decode) transforms, setup with types
add_docs(Pipeline,__call__="Compose `__call__` of all `fs` on `o`",
         decode="Compose `decode` of all `fs` on `o`",
         show="Show `o`, a single item from a tuple, decoding as needed",
         add="Add transforms `ts`",
         setup="Call each tfm's `setup` in order")
# Empty pipeline is noop
pipe = Pipeline()
test_eq(pipe(1), 1)
test_eq(pipe((1,)), (1,))

# Check pickle works
assert pickle.loads(pickle.dumps(pipe))
class IntFloatTfm(Transform):
    def encodes(self, x): return Int(x)
    def decodes(self, x): return Float(x)
    foo=1

int_tfm = IntFloatTfm()

def neg(x): return -x
neg_tfm = Transform(neg, neg)
pipe = Pipeline([neg_tfm, int_tfm])

start = 2.0
t = pipe(start)
test_eq_type(t, Int(-2))
test_eq_type(pipe.decode(t), Float(start))
test_stdout(lambda:pipe.show(t), '-2')
pipe = Pipeline([neg_tfm, int_tfm])
t = pipe(start)
test_stdout(lambda:pipe.show(pipe((1.,2.))), '-1\n-2')

test_eq(pipe.foo, 1)
assert 'foo' in dir(pipe)
assert 'int_float_tfm' in dir(pipe)
You can add a single transform or multiple transforms ts using Pipeline.add. Transforms will be ordered by Transform.order.
pipe = Pipeline([neg_tfm, int_tfm])

class SqrtTfm(Transform):
    order=-1
    def encodes(self, x): return x**(.5)
    def decodes(self, x): return x**2

pipe.add(SqrtTfm())
test_eq(pipe(4),-2)
test_eq(pipe.decode(-2),4)

pipe.add([SqrtTfm(),SqrtTfm()])
test_eq(pipe(256),-2)
test_eq(pipe.decode(-2),256)
Transforms are available as attributes named with the snake_case version of the names of their types. Attributes in transforms can be directly accessed as attributes of the pipeline.
test_eq(pipe.int_float_tfm, int_tfm)
test_eq(pipe.foo, 1)

pipe = Pipeline([int_tfm, int_tfm])
test_eq(pipe.int_float_tfm[0], int_tfm)
test_eq(pipe.foo, [1,1])
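The CamelCase-to-snake_case naming used for these attributes can be sketched as follows. This camel2snake helper is a hypothetical illustration; the library's own conversion may differ in edge cases:

```python
import re

# Convert a CamelCase class name (e.g. IntFloatTfm) to the snake_case
# attribute name used to look up the transform on the pipeline.
def camel2snake(name):
    s = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name)       # split before Upper+lower runs
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s).lower()  # split lower/digit->Upper

assert camel2snake('IntFloatTfm') == 'int_float_tfm'
```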
# Check opposite order
pipe = Pipeline([int_tfm,neg_tfm])
t = pipe(start)
test_eq(t, -2)
test_stdout(lambda:pipe.show(t), '-2')
class A(Transform):
def encodes(self, x): return int(x)
def decodes(self, x): return Float(x)
pipe = Pipeline([neg_tfm, A])
t = pipe(start)
test_eq_type(t, -2)
test_eq_type(pipe.decode(t), Float(start))
test_stdout(lambda:pipe.show(t), '-2.0')
s2 = (1,2)
pipe = Pipeline([neg_tfm, A])
t = pipe(s2)
test_eq_type(t, (-1,-2))
test_eq_type(pipe.decode(t), (Float(1.),Float(2.)))
test_stdout(lambda:pipe.show(t), '-1.0\n-2.0')
from PIL import Image
class ArrayImage(ndarray):
    _show_args = {'cmap':'viridis'}
    def __new__(cls, x, *args, **kwargs):
        if isinstance(x,tuple): return super().__new__(cls, x, *args, **kwargs)
        if args or kwargs: raise RuntimeError('Unknown array init args')
        if not isinstance(x,ndarray): x = array(x)
        return x.view(cls)

    def show(self, ctx=None, figsize=None, **kwargs):
        if ctx is None: _,ctx = plt.subplots(figsize=figsize)
        ctx.imshow(self, **{**self._show_args, **kwargs})
        ctx.axis('off')
        return ctx

im = Image.open(TEST_IMAGE)
im_t = ArrayImage(im)
def f1(x:ArrayImage): return -x
def f2(x): return Image.open(x).resize((128,128))
def f3(x:Image.Image): return ArrayImage(array(x))

pipe = Pipeline([f2,f3,f1])
t = pipe(TEST_IMAGE)
test_eq(type(t), ArrayImage)
test_eq(t, -array(f3(f2(TEST_IMAGE))))
pipe = Pipeline([f2,f3])
t = pipe(TEST_IMAGE)
ax = pipe.show(t)
class A(Transform):
def encodes(self, x): return int(x)
def decodes(self, x): return Float(x)
class B(Transform):
def encodes(self, x:int): return x+1
def encodes(self, x:str): return x+'_hello'
def decodes(self, x:int): return x-1
def decodes(self, x:str): return x.replace('_hello', '')
#Check filtering is properly applied
add1 = B()
add1.split_idx = 1
pipe = Pipeline([neg_tfm, A(), add1])
test_eq(pipe(start), -2)
pipe.split_idx=1
test_eq(pipe(start), -1)
pipe.split_idx=0
test_eq(pipe(start), -2)
for t in [None, 0, 1]:
    pipe.split_idx=t
    test_eq(pipe.decode(pipe(start)), start)
    test_stdout(lambda: pipe.show(pipe(start)), "-2.0")
def neg(x): return -x

test_eq(type(mk_transform(neg)), Transform)
test_eq(type(mk_transform(math.sqrt)), Transform)
test_eq(type(mk_transform(lambda a:a*2)), Transform)
test_eq(type(mk_transform(Pipeline([neg]))), Pipeline)