Transform

Definitions of Transform and Pipeline

The classes here provide functionality for creating a composition of partially reversible functions. By “partially reversible” we mean that a transform can be decoded, creating a form suitable for display. This is not necessarily identical to the original form (e.g. a transform that changes a byte tensor to a float tensor does not recreate a byte tensor when decoded, since that may lose precision, and a float tensor can be displayed already).

Classes are also provided for composing transforms and mapping them over collections. Pipeline is a transform which composes several Transforms, knowing how to decode them or show an encoded item.

The goal of this module is to replace fastcore.Transform by using the Plum package for multiple dispatch rather than the fastcore.dispatch module. Plum is a well-maintained library that provides better dispatch functionality. The example below builds two dispatch Functions by registering several implementations, then merges them with the internal _merge_funcs helper:

def f1(x:int): return 'int1'
def f2(x:float): return 'float2'
def f3(x:str): return 'str3' 
def f4(x:int): return 'int4'

f = Function(f1).dispatch(f1).dispatch(f2)
g = Function(f3).dispatch(f3).dispatch(f4)

h = _merge_funcs(f,g)
test_eq(h(1), 'int1')
test_eq(h('a'), 'str3')
test_eq(h(1.), 'float2')

source

Transform

 Transform (*args, **kwargs)

Delegates (__call__,decode,setup) to (encodes,decodes,setups) if split_idx matches

The main Transform features:

  • Type dispatch - Type annotations are used to determine if a transform should be applied to the given argument. It also gives an option to provide several implementations, and the one to run is chosen based on the type. This is useful, for example, when running both independent and dependent variables through the pipeline, where some transforms only make sense for one and not the other. Another use case is designing a transform that handles different data formats. Note that if a transform takes multiple arguments only the type of the first one is used for dispatch.
  • Handling of tuples - When a tuple (or a subclass of tuple) of data is passed to a transform, the transform is applied to each element separately. You can opt out of this behavior by passing a list or an L, as only tuples get this specific behavior. An alternative is to use ItemTransform, defined below, which always takes the input as a whole.
  • Reversibility - A transform can be made reversible by implementing the decodes method. This is mainly used to turn something like a category, which is encoded as a number, back into a label understandable by humans for display purposes. Like the regular __call__ method, decode is applied to each element of a tuple separately.
  • Type propagation - Whenever possible a transform tries to return data of the same type it received. This is mainly used to maintain the semantics of things like ArrayImage, which is a thin wrapper around numpy's ndarray. You can opt out of this behavior by adding a ->None return type annotation.
  • Preprocessing - The setup method can be used to perform any one-time calculations to be later used by the transform, for example generating a vocabulary to encode categorical data.
  • Filtering based on the dataset type - By setting the split_idx flag you can make the transform be used only in a specific DataSource subset like in training, but not validation.
  • Ordering - You can set the order attribute which the Pipeline uses when it needs to merge two lists of transforms.
  • Appending new behavior with decorators - You can easily extend an existing Transform by creating encodes or decodes methods for new data types. You can put those new methods outside the original transform definition and decorate them with the class you wish them patched into. This allows fastai library users to add their own behavior, or multiple modules to contribute to the same transform.
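
For instance, here is a minimal sketch (the class name is illustrative) combining three of these features: type dispatch, element-wise handling of tuples, and reversibility via decodes:

class NegInt(Transform):
    def encodes(self, x:int): return -x   # only applied to ints, thanks to type dispatch
    def decodes(self, x:int): return -x

t = NegInt()
test_eq(t((1, 2.0, 'a')), (-1, 2.0, 'a'))         # tuples are handled element-wise
test_eq(t.decode((-1, 2.0, 'a')), (1, 2.0, 'a'))  # decode reverses the encoding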

A realistic example

A Transform encodes (transforms) data while optionally providing a decode operation to convert back and setup function to initialize state variables used during en-/decoding.

This can be useful, for example, when encoding categorical variables in a machine learning pipeline.

The setup method can be used to track which category maps to which integer, while the decode method helps make the outputs human-readable again.

One way to create a transform is to create a subclass of Transform:

class CatEncoder(Transform):
    c2n, n2c = None, None
    def encodes(self, x): return self.c2n[x]
    def decodes(self, x): return self.n2c[x]
    def setups(self,x):
        self.c2n = {k:v for v,k in enumerate(x)}
        self.n2c = {k:v for v,k in self.c2n.items()}

Before you can use it, you have to instantiate it:

ce = CatEncoder()
ce
CatEncoder(enc:1,dec:1)

You can see that one encodes and one decodes method are defined.

To set up the encoder, call the setup method:

x = ('a','b','c','d')
ce.setup(x)
ce.c2n, ce.n2c
({'a': 0, 'b': 1, 'c': 2, 'd': 3}, {0: 'a', 1: 'b', 2: 'c', 3: 'd'})

To encode data you call the encoder directly:

ce(x)
(0, 1, 2, 3)

To decode data you call the decode method:

ce.decode(ce(x))
('a', 'b', 'c', 'd')

Defining a Transform

There are a few ways to create a transform, with different ratios of simplicity to flexibility:

  • Passing methods to the constructor - Instantiate the Transform class and pass your functions as enc and dec arguments.
  • @Transform decorator - Turn any function into a Transform by just adding a decorator - very straightforward if all you need is a single encodes implementation.
  • Extending the Transform class - Use inheritance to implement the methods you want.
  • Passing a function to fastai APIs - Same as above, but when passing a function to other transform-aware classes like Pipeline or TfmdDS you don't even need a decorator. Your function will get converted to a Transform automatically (see the sketch right after this list).
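
The first three options are covered in the subsections that follow. Here is a minimal sketch of the last one, assuming (as described above) that Pipeline wraps plain functions in a Transform automatically; Pipeline itself is covered in detail later on:

def double(x): return x*2
pipe = Pipeline([double])   # the plain function is converted to a Transform
test_eq(pipe(3), 6)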

Passing methods to the constructor

A simple way to create a Transform is to pass a function to the constructor. In the below example, we pass an anonymous function that multiplies its input by 2:

f = Transform(lambda o: o*2)

If you call this transform, it will apply the transformation:

test_eq_type(f(2), 4)

@Transform decorator

You can also define a Transform by using the @Transform decorator directly.

@Transform
def f(x:str): return f"hello {x}!"
test_eq(f("Alex"), "hello Alex!")

Define with classmethod

class B:
    @classmethod
    def create(cls, x:int): return x+1
test_eq(Transform(B.create)(1), 2)

Multiple dispatch

Type dispatch

Transform uses type annotations to automatically select the appropriate implementation for different input types.

This is called multiple dispatch, or type dispatch.

The benefit is that a single Transform can handle multiple data formats without explicit conditional logic.

def enc1(x: int): return x*2
def enc2(x: str): return f"hello {x}!"

f = Transform(enc=[enc1, enc2])
f
enc1(enc:2,dec:0)
test_eq_type(f(2), 4)
test_eq(f("Alex"), "hello Alex!")

Return the input if no type hint matches

If there is no valid method to which encode can dispatch, then a Transform will return the input value.

This is a useful default in the context of machine learning pre-processing pipelines.

def enc(x:str): return "str!"
f = Transform(enc)
test_eq(f(2), 2)

Ambiguous vs. not-found lookups

A difference with fastcore.Transform is that this version is stricter about ambiguous lookups.

That’s because this version uses the plum-dispatch library, which has a more rigorous system for resolving which method should handle a given input.

def enc1(x: int|str): return f"INT|STR {x=}!"
def enc2(x: float|str): return f"FLOAT|STR {x=}!"

e = Transform(enc=[enc1, enc2])
e
enc1(enc:2,dec:0)
test_eq(e(5), "INT|STR x=5!")
test_eq(e(.5), "FLOAT|STR x=0.5!")
test_eq(e([1]), [1]) # a not-found lookup falls back to returning the input
try: e("hi there")  # could be either encodes function
except AmbiguousLookupError: print("Caught an expected AmbiguousLookupError")
Caught an expected AmbiguousLookupError

Type inheritance for input types is supported

You can bring your own types:

class FS(float):
    def __repr__(self): return f'FS({super().__repr__()})'
    def __str__(self): return f'{super().__str__()}'
def enc1(x: int|FS): return x/2
h = Transform(enc1)
test_eq(h(FS(5.0)), 2.5)

And dispatch also works when the input is a subclass of an annotated type:

def enc1(x: int|float): return x/2
h = Transform(enc=enc1)
test_eq(h(FS(5.0)), 2.5)

Return type casting

Without any intervention it is easy for operations to change types in Python. For example, FS (defined above) becomes a float after performing multiplication:

test_eq_type(FS(3.0) * 2, 6.0)

This behavior is often not desirable when performing transformations on data. Therefore, Transform will attempt to cast the output to be of the same type as the input by default. In the below example, the output will be cast to a FS type to match the type of the input:

Without type annotations

@Transform
def f(x): return x*2

test_eq_type(f(FS(3.0)), FS(6.0))

We can optionally turn off casting by annotating the transform function with a return type of None:

Return type None

@Transform
def f(x)-> None: return x*2 # Same transform as above, but with a -> None annotation

test_eq_type(f(FS(3.0)), 6.0)  # Casting is turned off because of -> None annotation

However, Transform will only cast output back to the input type when the input is a subclass of the output. In the below example, the input is of type FS which is not a subclass of the output which is of type str. Therefore, the output doesn’t get cast back to FS and stays as type str:

@Transform
def f(x): return str(x)
    
test_eq_type(f(FS(2.)), '2.0')

Transform will attempt to convert the function output to the return type annotation.

Specific return types

If a return type annotation is given, Transform will convert it to that type:

@Transform
def f(x)->FS: return float(x)

# Output is converted to FS because FS is a subtype of float
test_eq(f(1.), FS(1.))

If the function returns a subclass of the annotated return type, that more specific type will be preserved since it’s already compatible with the annotation:

@Transform
def f(x)->float: return FS(x)

# FS output is kept because more specific than float
test_eq(f(1.), FS(1.))

When return types are given, the conversion will even happen if the output type is not a subclass of the return type annotation:

@Transform
def f(x)->str: return FS(x)

test_eq(f(1.), "FS(1.0)")

And here we get an expected error because it’s not possible to match the explicit return type:

@Transform
def f(x)->int: return str(x)

try: f("foo")
except Exception as e: print(f"Caught Exception: {e=}")
Caught Exception: e=ValueError("invalid literal for int() with base 10: 'foo'")

Type annotation with Decode

Just like encodes, the decodes method will cast outputs to match the input type in the same way. In the below example, the output of decodes remains of type FS:

def enc(x): return FS(x+1)
def dec(x): return x-1

f = Transform(enc,dec)
t = f(1.0)  # t will be FS
test_eq_type(f.decode(t), FS(1.0))

Transforms on Lists

Transform operates on lists as a whole, not element-wise:

def enc(x): return dict(x)
def dec(x): return list(x.items())
    
f = Transform(enc,dec)
_inp = [(1,2), (3,4)]
t = f(_inp)

test_eq(t, dict(_inp))
test_eq(f.decodes(t), _inp)

If you want a transform to operate on a list elementwise, you must implement this appropriately in the encodes and decodes methods:

def enc(x): return [x_+1 for x_ in x]
def dec(x): return [x_-1 for x_ in x]

f = Transform(enc,dec)
t = f([1,2])

test_eq(t, [2,3])
test_eq(f.decode(t), [1,2])

Transforms on Tuples

Unlike lists, Transform operates on tuples element-wise.

def neg_int(x): return -x
f = Transform(neg_int)

test_eq(f((1,2,3)), (-1,-2,-3))

Transforms will also apply type dispatch element-wise on tuples when an input type annotation is specified. In the below example, the values 1.0 and 3.0 are ignored because they are of type float, not int:

def neg_int(x:int): return -x
f = Transform(neg_int)

test_eq(f((1.0, 2, 3.0)), (1.0, -2, 3.0))

Another example of how Transform can use type dispatch with tuples is shown below:

def enc1(x: int): return x+1
def enc2(x: str): return x+'hello'
def enc3(x): return str(x)+'!'
f = Transform(enc=[enc1, enc2, enc3])

If the input is not an int or str, the third encodes method will apply:

test_eq(f([1]), '[1]!')
test_eq(f([1.0]), '[1.0]!')

However, if the input is a tuple, then the appropriate method will apply according to the type of each element in the tuple:

test_eq(f(('1',)), ('1hello',))
test_eq(f((1,2)), (2,3))
test_eq(f(('a',1.0)), ('ahello','1.0!'))

Dispatching over tuples also works recursively:

def enc1(x:int): return x+1
def enc2(x:str): return x+'_hello'
def dec1(x:int): return x-1
def dec2(x:str): return x.replace('_hello', '')

f = Transform(enc=[enc1, enc2], dec=[dec1, dec2])
start = (1.,(2,'3'))
t = f(start)
test_eq_type(t, (1.,(3,'3_hello')))
test_eq(f.decode(t), start)

Dispatching also works with abstract base classes from the numbers module, like numbers.Integral:

@Transform
def f(x:numbers.Integral): return x+1

t = f((1,'1',1))
test_eq(t, (2, '1', 2))

Transform on subsets with split_idx

By setting the split_idx attribute you can restrict when a transform is applied, for example to only the training or only the validation subset:

def enc(x): return x+1
def dec(x): return x-1
f = Transform(enc,dec)
f.split_idx = 1

The transformations are applied when a matching split_idx parameter is passed:

test_eq(f(1, split_idx=1),2)
test_eq(f.decode(2, split_idx=1),1)

On the other hand, transformations are ignored when the split_idx parameter does not match:

test_eq(f(1, split_idx=0), 1)
test_eq(f.decode(2, split_idx=0), 2)

Extending Transform

Limitation of calling Transform directly

However, a Transform created this way is not extensible: it has a single encodes implementation, and decorating another function with the same name simply rebinds the name to a brand-new Transform, overwriting the previous implementation:

@Transform
def g(x:int): return x*3

test_eq(g(2), 6)
test_eq(g('a'), 'a')  # <- falls back to returning the input
test_eq(len(g.encodes.methods), 1)

For extensible Transforms, take a look at the “Subclassing Transform” and “Extensions with decorators” sections below.

Subclassing Transform

When you subclass Transform you can define multiple encodes methods directly, one per input type.

class A(Transform):
    def encodes(self, x:int): return x*2
    def encodes(self, x:str): return f'hello {x}!'
test_eq(len(A.encodes.methods), 2)
a = A()
test_eq(a(2), 4)
test_eq(a('Alex'), "hello Alex!")

Continued inheritance is supported

class B(A):
    def encodes(self, x:int): return x*4
    def encodes(self, x:float): return x/2
test_eq(len(B.encodes.methods), 3)
b = B()
test_eq(b(2), 8)
test_eq(b('Alex'), 'hello Alex!')
test_eq(b(5.), 2.5)

As is multiple inheritance:

class A(Transform):
    def encodes(self, x:int): return x*2
    def encodes(self, x:str): return f'hello {x}!'

class B(Transform):
    def encodes(self, x:int): return x*4
    def encodes(self, x:float): return x/2

class C(B,A):  # C's methods are preferred over B's, which are preferred over A's
    def encodes(self, x:float): return x/4

test_eq(len(A.encodes.methods), 2)
test_eq(len(B.encodes.methods), 2)
test_eq(len(C.encodes.methods), 3)
c = C()
test_eq(c('Alex'), 'hello Alex!')  # A's str method
test_eq(c(5), 20)  # B's int method
test_eq(c(10.), 2.5)  # C's float method

Extensions with decorators

Another way to define a Transform is to extend the Transform class:

class A(Transform): pass

And then use decorators:

@A
def encodes(self, x:int): return x*2
@A
def decodes(self,x:int): return x//2
test_eq(len(A.encodes.methods),1)
test_eq(len(A.decodes.methods),1)
a = A()
test_eq(a(5),10)
test_eq(a.decode(a(5)),5)

Note that adding a method to a class (A) after instantiating the object (a):

@A
def encodes(self, x:str): return f'hello {x}!'

will result in the method being accessible from both the class and the existing instance:

test_eq(len(A.encodes.methods),2)
test_eq(len(a.encodes.methods),2)

Predefined Transform extensions

Below are some Transforms that may be useful as reusable components.

InplaceTransform


source

InplaceTransform

 InplaceTransform (*args, **kwargs)

A Transform that modifies in-place and just returns whatever it’s passed

class A(InplaceTransform): pass

@A
def encodes(self, x:pd.Series): x.fillna(10, inplace=True)
    
f = A()

test_eq_type(f(pd.Series([1,2,None])),pd.Series([1,2,10],dtype=np.float64)) #fillna fills with floats.

DisplayedTransform


source

DisplayedTransform

 DisplayedTransform (*args, **kwargs)

A transform with a __repr__ that shows its attrs

Transforms are normally represented by just their class name and the number of encodes and decodes implementations:

class A(Transform): encodes,decodes = noop,noop
f = A()
f
A(enc:2,dec:2)

A DisplayedTransform will in addition show the contents of all attributes listed in the comma-delimited string self.store_attrs:

class A(DisplayedTransform):
    encodes = noop
    def __init__(self, a, b=2):
        super().__init__()
        store_attr()
    
A(a=1,b=2)
A -- {'a': 1, 'b': 2}
(enc:2,dec:0)

ItemTransform


source

ItemTransform

 ItemTransform (*args, **kwargs)

A transform that always takes tuples as items

ItemTransform is the class to use to opt out of Transform's default behavior of applying itself to each element of a tuple: it always receives the whole tuple as a single item.

class AIT(ItemTransform): 
    def encodes(self, xy): x,y=xy; return (x+y,y)
    def decodes(self, xy): x,y=xy; return (x-y,y)
    
f = AIT()
test_eq(f((1,2)), (3,2))
test_eq(f.decode((3,2)), (1,2))

If you pass a special tuple subclass, the usual retain type behavior of Transform will keep it:

class _T(tuple): pass
x = _T((1,2))
test_eq_type(f(x), _T((3,2)))

Func


source

get_func

 get_func (t, name, *args, **kwargs)

Get the t.name (potentially partial-ized with args and kwargs) or noop if not defined

This works for any kind of t supporting getattr, so a class or a module.

test_eq(get_func(operator, 'neg', 2)(), -2)
test_eq(get_func(operator.neg, '__call__')(2), -2)
test_eq(get_func(list, 'foobar')([2]), [2])
a = [2,1]
get_func(list, 'sort')(a)
test_eq(a, [1,2])

Transforms are built with multiple dispatch: a given function can have several methods depending on the type of the object received. This is done with the Plum package and type annotations in Transform, but you can also use the following class.


source

Func

 Func (name, *args, **kwargs)

Basic wrapper around a name with args and kwargs to call on a given type

You can call the Func object on any module or type, or even a list of types. It will return the corresponding function (with a default to noop if nothing is found) or list of functions.

test_eq(Func('sqrt')(math), math.sqrt)

Sig

 Sig (*args, **kwargs)

Sig

Sig is just syntactic sugar to create a Func object more easily, with the syntax Sig.name(*args, **kwargs).

f = Sig.sqrt()
test_eq(f(math), math.sqrt)

Pipeline

A class for composing multiple (partially) reversible transforms

Pipeline allows you to compose multiple transforms that can be partially reversed through decoding. When a transform is “decoded”, it creates a form suitable for display, though this may not be identical to the original input (for instance, a transform from bytes to floats would typically decode to floats rather than converting back to bytes, since that could lose precision).

Pipeline handles the composition of multiple transforms while maintaining the ability to decode or display the transformed items at any stage.
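
Before looking at the helper functions, here is a minimal sketch of the idea (the transforms are illustrative): a pipeline chaining an encode-only step with a reversible one, where decode only undoes the reversible step.

def halve(x): return x/2                               # encode-only: no decode defined
scale = Transform(lambda x: x*10, lambda x: x/10)      # a reversible scaling step

pipe = Pipeline([halve, scale])
t = pipe(4.0)                    # 4.0 -> 2.0 -> 20.0
test_eq(t, 20.0)
test_eq(pipe.decode(t), 2.0)     # only the reversible scaling step is undone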


source

compose_tfms

 compose_tfms (x, tfms, is_enc=True, reverse=False, **kwargs)

Apply all func_nm attribute of tfms on x, maybe in reverse order

def to_int  (x):   return Int(x)
def to_float(x):   return Float(x)
def double  (x):   return x*2
def half(x)->None: return x/2
def test_compose(a, b, *fs): test_eq_type(compose_tfms(a, tfms=map(Transform,fs)), b)

test_compose(1,   Int(1),   to_int)
test_compose(1,   Float(1), to_int,to_float)
test_compose(1,   Float(2), to_int,to_float,double)
test_compose(2.0, 2.0,      to_int,double,half)
class A(Transform):
    def encodes(self, x:float):  return Float(x+1)
    def decodes(self, x): return x-1
    
tfms = [A(), Transform(math.sqrt)]
t = compose_tfms(3., tfms=tfms)
test_eq_type(t, Float(2.))
test_eq(compose_tfms(t, tfms=tfms, is_enc=False), 1.)
test_eq(compose_tfms(4., tfms=tfms, reverse=True), 3.)
tfms = [A(), Transform(math.sqrt)]
test_eq(compose_tfms((9,3.), tfms=tfms), (3,2.))

source

mk_transform

 mk_transform (f)

Convert function f to Transform if it isn’t already one


source

gather_attrs

 gather_attrs (o, k, nm)

Used in getattr to collect all attrs k from self.{nm}


source

gather_attr_names

 gather_attr_names (o, nm)

Used in dir to collect all attrs k from self.{nm}


source

Pipeline

 Pipeline (funcs=None, split_idx=None)

A pipeline of composed (for encode/decode) transforms, setup with types

add_docs(Pipeline,
         __call__="Compose `__call__` of all `fs` on `o`",
         decode="Compose `decode` of all `fs` on `o`",
         show="Show `o`, a single item from a tuple, decoding as needed",
         add="Add transforms `ts`",
         setup="Call each tfm's `setup` in order")
# Empty pipeline is noop
pipe = Pipeline()
test_eq(pipe(1), 1)
test_eq(pipe((1,)), (1,))
# Check pickle works
assert pickle.loads(pickle.dumps(pipe))
class IntFloatTfm(Transform):
    def encodes(self, x):  return Int(x)
    def decodes(self, x):  return Float(x)
    foo=1

int_tfm=IntFloatTfm()

def neg(x): return -x
neg_tfm = Transform(neg, neg)
pipe = Pipeline([neg_tfm, int_tfm])

start = 2.0
t = pipe(start)
test_eq_type(t, Int(-2))
test_eq_type(pipe.decode(t), Float(start))
test_stdout(lambda:pipe.show(t), '-2')
pipe = Pipeline([neg_tfm, int_tfm])
t = pipe(start)
test_stdout(lambda:pipe.show(pipe((1.,2.))), '-1\n-2')
test_eq(pipe.foo, 1)
assert 'foo' in dir(pipe)
assert 'int_float_tfm' in dir(pipe)

You can add a single transform or multiple transforms ts using Pipeline.add. Transforms will be ordered by Transform.order.

pipe = Pipeline([neg_tfm, int_tfm])
class SqrtTfm(Transform):
    order=-1
    def encodes(self, x): 
        return x**(.5)
    def decodes(self, x): return x**2
pipe.add(SqrtTfm())
test_eq(pipe(4),-2)
test_eq(pipe.decode(-2),4)
pipe.add([SqrtTfm(),SqrtTfm()])
test_eq(pipe(256),-2)
test_eq(pipe.decode(-2),256)

Transforms are available as attributes named with the snake_case version of the names of their types. Attributes in transforms can be directly accessed as attributes of the pipeline.

test_eq(pipe.int_float_tfm, int_tfm)
test_eq(pipe.foo, 1)

pipe = Pipeline([int_tfm, int_tfm])
pipe.int_float_tfm
test_eq(pipe.int_float_tfm[0], int_tfm)
test_eq(pipe.foo, [1,1])
# Check opposite order
pipe = Pipeline([int_tfm,neg_tfm])
t = pipe(start)
test_eq(t, -2)
test_stdout(lambda:pipe.show(t), '-2')
class A(Transform):
    def encodes(self, x):  return int(x)
    def decodes(self, x):  return Float(x)

pipe = Pipeline([neg_tfm, A])
t = pipe(start)
test_eq_type(t, -2)
test_eq_type(pipe.decode(t), Float(start))
test_stdout(lambda:pipe.show(t), '-2.0')
s2 = (1,2)
pipe = Pipeline([neg_tfm, A])
t = pipe(s2)
test_eq_type(t, (-1,-2))
test_eq_type(pipe.decode(t), (Float(1.),Float(2.)))
test_stdout(lambda:pipe.show(t), '-1.0\n-2.0')
from PIL import Image
class ArrayImage(ndarray):
    _show_args = {'cmap':'viridis'}
    def __new__(cls, x, *args, **kwargs):
        if isinstance(x,tuple): return super().__new__(cls, x, *args, **kwargs)
        if args or kwargs: raise RuntimeError('Unknown array init args')
        if not isinstance(x,ndarray): x = array(x)
        return x.view(cls)
    
    def show(self, ctx=None, figsize=None, **kwargs):
        if ctx is None: _,ctx = plt.subplots(figsize=figsize)
        ctx.imshow(self, **{**self._show_args, **kwargs})
        ctx.axis('off')
        return ctx
    
im = Image.open(TEST_IMAGE)
im_t = ArrayImage(im)
def f1(x:ArrayImage): return -x
def f2(x): return Image.open(x).resize((128,128))
def f3(x:Image.Image): return(ArrayImage(array(x)))
pipe = Pipeline([f2,f3,f1])
t = pipe(TEST_IMAGE)
test_eq(type(t), ArrayImage)
test_eq(t, -array(f3(f2(TEST_IMAGE))))
pipe = Pipeline([f2,f3])
t = pipe(TEST_IMAGE)
ax = pipe.show(t)

class A(Transform):
    def encodes(self, x):  return int(x)
    def decodes(self, x):  return Float(x)
    
class B(Transform):
    def encodes(self, x:int): return x+1
    def encodes(self, x:str): return x+'_hello'
    def decodes(self, x:int): return x-1
    def decodes(self, x:str): return x.replace('_hello', '')
#Check filtering is properly applied
add1 = B()
add1.split_idx = 1
pipe = Pipeline([neg_tfm, A(), add1])
test_eq(pipe(start), -2)
pipe.split_idx=1
test_eq(pipe(start), -1)
pipe.split_idx=0
test_eq(pipe(start), -2)
for t in [None, 0, 1]:
    pipe.split_idx=t
    test_eq(pipe.decode(pipe(start)), start)
    test_stdout(lambda: pipe.show(pipe(start)), "-2.0")
def neg(x): return -x
test_eq(type(mk_transform(neg)), Transform)
test_eq(type(mk_transform(math.sqrt)), Transform)
test_eq(type(mk_transform(lambda a:a*2)), Transform)
test_eq(type(mk_transform(Pipeline([neg]))), Pipeline)